With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. Traditional network architectures cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to enable flexible computing power scheduling. In this survey, we make an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, we present a comprehensive elaboration of issues in computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform. A computing power network testbed is built and evaluated, and applications and use cases in computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both paths and computing nodes in a distributed computing power network. Unlike traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better average processing delay, user satisfaction, and load balancing than existing works.
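As a hedged illustration of the multi-attribute route selection idea described above, the sketch below scores candidate (path, computing node) pairs by a weighted sum of normalized cost metrics. The metric names, weights, and min-max normalization are illustrative assumptions, not the paper's actual FLBR or lPMADM formulation.

```python
# Hypothetical multi-attribute scoring of routing candidates; all metric
# names and weights below are assumptions for illustration only.

def normalize(values):
    """Min-max normalize a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def score_candidates(candidates, weights):
    """Score each candidate (path, computing node) pair.

    candidates: list of dicts of cost-type metrics (lower is better).
    weights: metric name -> weight, with weights summing to 1.
    Returns one aggregate score per candidate; the lowest score wins.
    """
    metrics = list(weights)
    norm = {m: normalize([c[m] for c in candidates]) for m in metrics}
    return [sum(weights[m] * norm[m][i] for m in metrics)
            for i in range(len(candidates))]

candidates = [
    {"hop_delay": 12.0, "queue_len": 3, "cpu_load": 0.40},
    {"hop_delay": 20.0, "queue_len": 1, "cpu_load": 0.85},
    {"hop_delay": 15.0, "queue_len": 5, "cpu_load": 0.30},
]
weights = {"hop_delay": 0.5, "queue_len": 0.3, "cpu_load": 0.2}
scores = score_candidates(candidates, weights)
best = scores.index(min(scores))  # index of the selected candidate
```

In this toy instance the first candidate wins: it combines the lowest delay with moderate queue length and CPU load, which is exactly the trade-off a computing-aware routing metric is meant to capture.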
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for efficient and secure resource management, owing to issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to minimize the average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
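The near-optimal placement search can be pictured with a generic simulated-annealing loop. This is only a sketch under assumed inputs (a made-up latency matrix and cooling schedule), not the paper's ISAPA, which additionally enforces a DT error constraint.

```python
import math
import random

# Illustrative simulated-annealing placement of digital twins on servers.
# The latency matrix and cooling parameters are invented for this sketch.
random.seed(0)

# latency[d][s]: synchronization latency if twin d is placed on server s
latency = [
    [5.0, 2.0, 8.0],
    [3.0, 7.0, 1.0],
    [6.0, 4.0, 2.0],
]

def avg_latency(placement):
    """Average synchronization latency of a placement (twin -> server)."""
    return sum(latency[d][s] for d, s in enumerate(placement)) / len(placement)

def anneal(n_twins, n_servers, t0=10.0, cooling=0.95, steps=500):
    cur = [random.randrange(n_servers) for _ in range(n_twins)]
    best, t = cur[:], t0
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(n_twins)] = random.randrange(n_servers)
        delta = avg_latency(cand) - avg_latency(cur)
        # always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / t) to escape local minima
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur = cand
        if avg_latency(cur) < avg_latency(best):
            best = cur[:]
        t *= cooling
    return best

best = anneal(3, 3)
```

On this tiny instance an exhaustive search (the EOPA role) is trivial, but the annealing loop is the part that scales to placements where enumeration becomes infeasible.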
Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), where all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using localized data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate MEC servers. We then formulate a comprehensive algorithm that jointly optimizes the communication resource allocation (wireless bandwidth and transmission power) and the computation resource allocation (computation capacity of MEC servers) while guaranteeing the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
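A toy Stackelberg exchange of the kind mentioned above can be sketched as follows: the leader (the computing power center) announces a unit reward, and each follower (an MEC server) best-responds with its training effort. The quadratic cost and logarithmic leader utility are assumptions chosen for illustration, not the paper's model.

```python
import math

# Assumed follower model: server i with unit cost c_i chooses effort x to
# maximize p*x - c_i*x^2, giving the closed-form best response x = p/(2*c_i).
costs = [1.0, 2.0, 4.0]  # hypothetical cost parameters for three servers

def follower_best_response(p, c):
    return p / (2.0 * c)

def leader_utility(p, lam=10.0):
    """Assumed leader payoff: log-valued total effort minus reward paid."""
    total = sum(follower_best_response(p, c) for c in costs)
    return lam * math.log(1.0 + total) - p * total

# Backward induction: knowing the followers' responses, the leader
# searches its reward over a grid to find the Stackelberg strategy.
grid = [i / 100.0 for i in range(1, 501)]
p_star = max(grid, key=leader_utility)
efforts = [follower_best_response(p_star, c) for c in costs]
```

For these parameters the analytic optimum solves 0.875 p^2 + p - 5 = 0, giving p* ≈ 1.89, and cheaper servers contribute more effort, which is the qualitative behavior an incentive mechanism aims for.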
Federated Learning (FL) is a distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the GAS sorting results are written into the SRv6 packet header to control the order in which clients participate in model training. Extensive experiments on the CIFAR-10 and MNIST datasets demonstrate that the proposed algorithm offers improved accuracy, reduced communication costs, and lower network delays.
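The clustering-by-delay step can be pictured with a minimal sketch: sort clients by task-processing delay and cut the sorted list into contiguous groups, so the fastest and slowest members of each cluster are close and per-cluster waiting stays small. The grouping rule below is an assumption for illustration; the paper's CCS may differ.

```python
# Hypothetical delay-based client clustering; delays and cluster count
# are made-up example values.

def cluster_by_delay(delays, k):
    """Split clients into k contiguous groups after sorting by delay.

    Returns a list of k lists of (client_id, delay), ordered by delay.
    """
    order = sorted(range(len(delays)), key=lambda i: delays[i])
    size = -(-len(delays) // k)  # ceiling division
    return [[(i, delays[i]) for i in order[j:j + size]]
            for j in range(0, len(order), size)]

delays = [12.0, 3.0, 8.0, 25.0, 5.0, 30.0]  # per-client delay (ms)
clusters = cluster_by_delay(delays, 3)
# intra-cluster wait = slowest minus fastest member of each cluster
waits = [c[-1][1] - c[0][1] for c in clusters]
```

With these values the per-cluster waits are 2, 4, and 5 ms, far below the 27 ms spread a single unclustered round would incur, which is the effect CCS exploits.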
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The analysis shows that the computing power network can effectively meet future 6G businesses' needs for multi-level deployment and flexible scheduling of computing, storage, and network resources, and can adapt to the integration of computing power and networking in various scenarios, such as user-oriented services, government and enterprise services, and open computing power.
With the evolution of 5th-generation (5G) and 6th-generation (6G) wireless communication technologies, various Internet of Things (IoT) devices and artificial intelligence applications are proliferating, putting enormous pressure on existing computing power networks. Unmanned aerial vehicle (UAV)-enabled mobile edge computing (U-MEC) shows potential to alleviate this pressure and has been recognized as a new paradigm for responding to the data explosion. Nevertheless, the conflict between computing demands and resource-constrained UAVs poses a great challenge. Researchers have recently proposed resource management solutions in U-MEC for computing tasks with dependencies, but the repeatability among tasks has been ignored. In this paper, considering both repeatability and dependency, we propose a U-MEC paradigm based on a computing power pool for processing computation-intensive tasks, in which UAVs share information and computing resources. To ensure effective construction of the computing power pool, we formulate the problem of balancing UAV energy consumption through joint optimization of the offloading strategy, task scheduling, and resource allocation. To address this NP-hard problem, we adopt a two-stage alternating optimization algorithm based on successive convex approximation (SCA) and an improved genetic algorithm (GA). Simulation results show that the proposed scheme reduces time consumption by 18.41% and energy consumption by 21.68% on average, improving the working efficiency of UAVs.
Considering the privacy challenges of secure storage and controlled flow, there is an urgent need to realize a decentralized ecosystem of private blockchain for cyberspace. A collaboration dilemma arises when participants are self-interested and lack feedback of complete information. Traditional blockchains have related faults, such as trustlessness, single-factor consensus, and a heavyweight distributed ledger, preventing them from adapting to the heterogeneous and resource-constrained Internet of Things. In this paper, we develop a game-theoretic design of a two-sided rating with complete-information feedback to stimulate collaboration in private blockchains. The design consists of an evolution strategy for the decision-making network and a computing power network for continuously verifiable proofs. We formulate the optimum rating and resource scheduling problems as two-stage iterative games between participants and leaders, and theoretically prove that the Stackelberg equilibrium exists and that the group evolution is stable. Then, we propose a multi-stage evolution consensus with feedback on a block-accounting workload for metadata survival. To continuously validate a block, the metadata of the optimum rating, privacy, and proofs are extracted and stored on a lightweight blockchain. Moreover, to increase resource utilization, surplus computing power is scheduled flexibly to enhance security by degrees. Finally, evaluation results show the validity and efficiency of our model, which solves the collaboration dilemma in the private blockchain.
With the support of Vehicle-to-Everything (V2X) technology and computing power networks, existing intersection traffic management is expected to benefit from efficiency improvements and energy savings through new schemes such as de-signalization. How to effectively manage autonomous vehicles for high-throughput traffic control at unsignalized intersections while ensuring safety has become a research hotspot. This paper proposes a collision-free autonomous vehicle scheduling framework based on edge-cloud computing power networks for unsignalized intersections where the lanes entering the intersection are undirectional, and designs an efficient communication system and protocol. First, by analyzing the occupation time of collision points, we formulate an absolute-value programming problem. Second, this problem is solved with low complexity by the Edge Intelligence Optimal Entry Time (EI-OET) algorithm, supported by edge-cloud computing power. Then, a communication system and protocol are designed for the proposed scheduling scheme to realize efficient, low-latency vehicular communications. Finally, simulation experiments compare the proposed framework with directional and traditional traffic-light scheduling mechanisms, and the results demonstrate its high efficiency, low latency, and low complexity.
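The collision-point scheduling idea can be pictured with a much simpler greedy rule: serve vehicles in order of earliest possible arrival and delay each one just enough that occupation intervals at a shared collision point never overlap. This first-come-first-served rule is an illustrative assumption, not the EI-OET algorithm's absolute-value program.

```python
# Toy conflict-free entry-time assignment at a single collision point.
# Arrival times and occupation duration are invented example values.

def assign_entry_times(arrivals, occ):
    """Greedily delay vehicles so occupation intervals never overlap.

    arrivals: vehicle id -> earliest possible arrival time (s).
    occ: time (s) each vehicle occupies the collision point.
    """
    entry = {}
    free_at = 0.0  # time at which the collision point becomes free
    # serve vehicles first-come-first-served by earliest arrival
    for vid, t in sorted(arrivals.items(), key=lambda kv: kv[1]):
        start = max(t, free_at)
        entry[vid] = start
        free_at = start + occ
    return entry

arrivals = {"v1": 0.0, "v2": 0.5, "v3": 0.6, "v4": 3.0}
entry = assign_entry_times(arrivals, occ=1.0)
```

Here v2 and v3 are pushed back to 1.0 s and 2.0 s so their 1-second occupation windows never intersect, while v4 passes at its natural arrival time; an optimal scheduler would additionally weigh throughput across many collision points.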
With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace never seen in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new behaviors bring significant challenges to power system modeling and simulation, as more data must be analyzed for larger systems, and more complex models must be solved in shorter time periods. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high-performance computing (HPC), and describes the drivers for employing HPC techniques. Application examples using different HPC techniques, including the latest quantum computing, are also presented to show how HPC can help meet the requirements of power system computing in a clean-energy future.
To lower power consumption and improve resource utilization in current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on a "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented that abstracts the task scheduling problem as a virtual machine deployment problem using virtualization technology. Second, future system workloads are predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which combines the current state and resource distribution of the system to calculate the resource demand of the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy keep resource pre-allocation in step with demand and improve real-time responsiveness and system stability. Both RA-PM and RA-ISA activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
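The cubic (triple) exponential smoothing predictor can be sketched with Brown's classic formulation; the CESCC variant described above adds conservative-control corrections that are omitted here, so treat this as an assumed baseline with made-up workload numbers.

```python
# Brown's triple exponential smoothing: three cascaded smoothing passes,
# then a quadratic extrapolation m steps ahead.

def triple_smooth(series, alpha):
    """Return the three smoothing sequences s1, s2, s3."""
    s1, s2, s3 = [series[0]], [series[0]], [series[0]]
    for x in series[1:]:
        s1.append(alpha * x + (1 - alpha) * s1[-1])
        s2.append(alpha * s1[-1] + (1 - alpha) * s2[-1])
        s3.append(alpha * s2[-1] + (1 - alpha) * s3[-1])
    return s1, s2, s3

def forecast(series, alpha, m=1):
    """Forecast m steps ahead from the latest smoothed statistics."""
    a1, a2, a3 = (v[-1] for v in triple_smooth(series, alpha))
    a = 3 * a1 - 3 * a2 + a3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * a1
        - 2 * (5 - 4 * alpha) * a2
        + (4 - 3 * alpha) * a3)
    c = (alpha ** 2 / (2 * (1 - alpha) ** 2)) * (a1 - 2 * a2 + a3)
    return a + b * m + c * m * m

load = [100, 104, 109, 115, 122, 130]  # assumed recent workload history
pred = forecast(load, alpha=0.6)
```

For this accelerating series the predictor extrapolates above the last observation (around 139 for the next period), which is the behavior pre-allocation relies on: capacity is turned on ahead of rising demand rather than after it.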
Artificial intelligence (AI) has evolved at an unprecedented pace in recent years. This rapid advancement includes algorithmic breakthroughs, cross-disciplinary integration, and diverse applications, driven by growing computational power, massive datasets, and collaborative global research. This special issue, Emerging Artificial Intelligence Technologies and Applications, was conceived to provide a platform for communicating cutting-edge AI research, novel methodologies, cross-domain applications, and critical advancements in addressing real-world challenges. Over the past months, we have witnessed a remarkable diversity of submissions, reflecting the global trend of AI innovation. Below, we synthesize the key insights from these works, highlighting their collective contribution to advancing AI's theoretical frontiers and practical applications.
With the growing demand for deep integration between computing power networks (CPNs) and energy systems (ESs), effective collaboration between these systems has become increasingly crucial. To facilitate such integration, this paper proposes an energy-computing integrated system (ECIS) built on a four-layer framework comprising a physical layer, a networked digital twin layer, a service layer, and a communication layer, each interdependent and playing a distinct role. The ECIS enables global dynamic scheduling and optimisation of electric power and computing power resources. We provide a detailed overview of the functions and interactions within the four layers of the ECIS, discussing its potential to enhance resource utilisation, support green and low-carbon development, and improve system flexibility. By fostering efficient collaboration between power and computing resources, the proposed four-layer framework can significantly improve operational efficiency. Furthermore, we explore potential challenges in implementing ECIS and outline future research directions to address them.
This article investigates the dynamic relationship between technology and artificial intelligence (AI) and the role that societal requirements play in driving AI research and adoption. Technology has advanced dramatically over the years, laying the groundwork for the rise of AI. AI systems have achieved remarkable feats in various disciplines thanks to advancements in computing power, data availability, and sophisticated algorithms. In turn, society's needs for efficiency, enhanced healthcare, environmental sustainability, and personalized experiences have acted as powerful accelerators of AI's progress. This article digs into how technology empowers AI and how societal needs shape its progress, emphasizing their symbiotic relationship. The findings underline the significance of responsible AI research, which considers both technological prowess and ethical issues, to ensure that AI continues to serve the greater good.
The International Union of Geological Sciences (IUGS) is evaluating whether additional geoscientific activities would be beneficial in helping mitigate the impacts of tsunamis. Public concerns about poor decisions and inaction, together with advances in computing power and data mining, call for new scientific approaches. Three fundamental requirements for mitigating the impacts of natural hazards are defined: (1) improvement of process-oriented understanding, (2) adequate monitoring and optimal use of data, and (3) generation of advice based on scientific, technical, and socio-economic expertise. International leadership and coordination are also important.
Modern computer systems are increasingly bounded by the available or permissible power at multiple layers, from individual components to data centers. To cope with this reality, it is necessary to understand how power bounds impact performance, especially for systems built from high-end nodes, each consisting of multiple power-hungry components. Because placing an inappropriate power bound on a node or a component can lead to severe performance loss, coordinating power allocation among nodes and components is mandatory to achieve the desired performance under a total power budget. In this article, we describe the paradigm of power-bounded high-performance computing, which considers coordinated power bound assignment to be a key factor in computer system performance analysis and optimization. We apply this paradigm to the problem of power coordination across multiple layers for both CPU and GPU computing. Using several case studies, we demonstrate how the principles of balanced power coordination can be applied and adapted to the interplay of workloads, hardware technology, and the available total power for performance improvement.
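One way to picture coordinated power-bound assignment is a simple budget split: start every component at its minimum power, then distribute the remaining budget in proportion to each component's remaining demand, capped by its limits. The policy and the wattage numbers below are illustrative assumptions, not the article's actual scheme.

```python
# Hypothetical proportional power-budget split across node components.

def allocate_power(budget, comps):
    """comps: name -> (min_w, max_w, demand_w); returns name -> watts."""
    alloc = {n: c[0] for n, c in comps.items()}  # start at minimums
    spare = budget - sum(alloc.values())
    assert spare >= 0, "budget below sum of component minimums"
    active = set(comps)
    while spare > 1e-9 and active:
        # headroom each component still wants, capped by its max bound
        head = {n: min(comps[n][1], comps[n][2]) - alloc[n] for n in active}
        head = {n: h for n, h in head.items() if h > 1e-9}
        if not head:
            break  # everyone is satisfied; leftover budget stays unused
        total = sum(head.values())
        share = min(1.0, spare / total)
        for n, h in head.items():
            alloc[n] += h * share
        spare = budget - sum(alloc.values())
        active = set(head)
    return alloc

comps = {
    "cpu": (50.0, 150.0, 120.0),  # (min, max, demand) in watts
    "gpu": (75.0, 300.0, 260.0),
    "mem": (10.0, 40.0, 20.0),
}
alloc = allocate_power(250.0, comps)
```

With a 250 W node budget the GPU, having the most unmet demand, receives the largest share, while every component stays inside its own bounds; real coordinators refine such splits using measured workload sensitivity to power.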
Computing power, algorithms, and data are the core factors of the digital economy. According to figures from the Ministry of Industry and Information Technology, as of the end of June, China had built 10.85 million standard racks, and its intelligent computing power had reached 788 EFLOPS, where 1 EFLOPS equals 1 quintillion floating-point calculations per second. China tops the global large language model count, with 1,509 models released.
This book elaborates on the megatrends in the evolution of AI technology, the breakthroughs in computing-power-driven computing systems, and how AI can empower the life sciences, the Internet of Things, autonomous driving, and more. It also systematically analyzes the problems and risks behind the rapid development of artificial intelligence, as well as the countermeasures of governments, enterprises, and scientific research communities at home and abroad. Finally, the book looks forward to future technological trends and innovation paths for the industry as a whole; the opportunities and challenges for China; the roles and responsibilities of industry, academia, and research in the fourth industrial revolution; talent training; and global scientific and technological exchanges in the AI era.
1 Introduction With rapid development in computing power and breakthroughs in deep learning, the concept of "foundation models" has been introduced into the AI community. Generally, foundation models are large models trained on massive data that can be easily adapted to different domains for various tasks. With specific prompts, foundation models can generate text and images, or even animate scenarios based on given descriptions. Owing to these powerful capabilities, there is a growing trend toward building agents based on foundation models. In this paper, we investigate agents empowered by foundation models.
1 Introduction The perpetual demand for computational power in scientific computing propels high-performance computing (HPC) systems toward Zettascale computation and beyond [1,2]. Concurrently, the rise of artificial intelligence (AI) has engendered a marked surge in computational requirements, with the required computing performance doubling every 3.4 months [3]. The collective pursuit of ultra-high computational capability has positioned AI and scientific computing as the preeminent twin drivers of HPC.
Funding (CPN survey): supported by the National Science Foundation of China under Grants 62271062 and 62071063, and by the Zhijiang Laboratory Open Project Fund 2020LCOAB01.
Funding (blockchain-empowered WCPN digital-twin paper): supported by the National Natural Science Foundation of China under Grant 62272391, and in part by the Key Industry Innovation Chain of Shaanxi under Grant 2021ZDLGY05-08.
Funding (FL-based CPN incentive paper): partly funded by the MOST Major Research and Development Project (Grant No. 2021YFB2900204), the Natural Science Foundation of China (Grant No. 62132004), the Sichuan Major R&D Project (Grant No. 22QYCX0168), and the Key Research and Development Program of Zhejiang Province (Grant No. 2022C01093).
Funding (FedACT paper): supported by the National Key R&D Program of China (No. 2021YFB2900200).
Funding: This work was supported by the National Key R&D Program of China (No. 2019YFB1802800).
Abstract: In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration of computing power and network in various scenarios, such as user-oriented, government- and enterprise-oriented, and open computing power scenarios.
Funding: Supported by the Natural Science Foundation of Jiangsu Province, China (No. BK20211227) and the National Natural Science Foundation of China (No. 62273356).
Abstract: With the evolution of 5th-generation (5G) and 6th-generation (6G) wireless communication technologies, various Internet of Things (IoT) devices and artificial intelligence applications are proliferating, putting enormous pressure on existing computing power networks. Unmanned aerial vehicle (UAV)-enabled mobile edge computing (U-MEC) shows potential to alleviate this pressure and has been recognized as a new paradigm for responding to the data explosion. Nevertheless, the conflict between computing demands and resource-constrained UAVs poses a great challenge. Researchers have recently proposed resource management solutions in U-MEC for computing tasks with dependency; however, the repeatability among tasks was ignored. In this paper, considering both repeatability and dependency, we propose a U-MEC paradigm based on a computing power pool for processing computationally intensive tasks, in which UAVs can share information and computing resources. To ensure the effectiveness of computing power pool construction, the problem of balancing the energy consumption of UAVs is formulated through joint optimization of the offloading strategy, task scheduling, and resource allocation. To address this NP-hard problem, we adopt a two-stage alternating optimization algorithm based on successive convex approximation (SCA) and an improved genetic algorithm (GA). Simulation results show that the proposed scheme reduces time consumption by 18.41% and energy consumption by 21.68% on average, improving the working efficiency of UAVs.
Funding: Supported by the National Key R&D Program of China under Grant No. 2021YFB3101904 and the fund under Grant No. 2021JCJQQT075.
Abstract: Considering the privacy challenges of secure storage and controlled flow, there is an urgent need to realize a decentralized ecosystem of private blockchain for cyberspace. A collaboration dilemma arises when participants are self-interested and lack complete information feedback. Traditional blockchains have similar faults, such as trustlessness, single-factor consensus, and heavyweight distributed ledgers, preventing them from adapting to the heterogeneous and resource-constrained Internet of Things. In this paper, we develop a game-theoretic design of a two-sided rating with complete information feedback to stimulate collaboration in a private blockchain. The design consists of an evolution strategy for the decision-making network and a computing power network for continuously verifiable proofs. We formulate the optimum rating and resource scheduling problems as two-stage iterative games between participants and leaders. We theoretically prove that the Stackelberg equilibrium exists and that the group evolution is stable. We then propose a multi-stage evolution consensus with feedback on block-accounting workload for metadata survival. To continuously validate a block, the metadata of the optimum rating, privacy, and proofs are extracted and stored on a lightweight blockchain. Moreover, to increase resource utilization, surplus computing power is scheduled flexibly to enhance security by degrees. Finally, the evaluation results show the validity and efficiency of our model, thereby resolving the collaboration dilemma in the private blockchain.
Funding: Supported by the Natural Science Fund for Distinguished Young Scholars of Jiangsu Province under Grant BK20220067.
Abstract: With the support of Vehicle-to-Everything (V2X) technology and computing power networks, existing intersection traffic management is expected to gain efficiency improvements and energy savings from new schemes such as de-signalization. How to effectively manage autonomous vehicles for high-throughput traffic control at unsignalized intersections while ensuring safety has been a research hotspot. This paper proposes a collision-free autonomous vehicle scheduling framework based on edge-cloud computing power networks for unsignalized intersections where the lanes entering the intersection are undirectional, and designs an efficient communication system and protocol. First, by analyzing the collision point occupation time, we formulate an absolute value programming problem. Second, this problem is solved with low complexity by the Edge Intelligence Optimal Entry Time (EI-OET) algorithm, which builds on edge-cloud computing power support. Then, the communication system and protocol are designed for the proposed scheduling scheme to realize efficient, low-latency vehicular communications. Finally, simulation experiments compare the proposed scheduling framework with directional and traditional traffic-light scheduling mechanisms, and the results demonstrate its high efficiency, low latency, and low complexity.
Funding: Supported by the U.S. Department of Energy through its Advanced Grid Modeling program, the Exascale Computing Program (ECP), The Grid Modernization Laboratory Consortium (GMLC), the Advanced Research Projects Agency-Energy (ARPA-E), The National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA), and the Office of Advanced Scientific Computing Research (ASCR).
Abstract: With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace never before seen in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new behaviors bring significant challenges to power system modeling and simulation, as more data must be analyzed for larger systems and more complex models must be solved in shorter time periods. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high-performance computing (HPC), and describes the drivers for employing HPC techniques. Application examples using different HPC techniques, including the latest quantum computing, are also presented to show how HPC can help us prepare to meet the requirements of power system computing in a clean energy future.
Funding: Supported by the National Natural Science Foundation of China (61472192, 61202004), the Special Fund for Fast Sharing of Science Papers in the Net Era by CSTD (2013116), and the Natural Science Fund of Higher Education of Jiangsu Province (14KJB520014).
Abstract: To lower power consumption and improve resource utilization in current cloud computing systems, this paper proposes two resource pre-allocation algorithms based on the "shut down the redundant, turn on the demanded" strategy. First, a green cloud computing model is presented, which abstracts the task scheduling problem into a virtual machine deployment problem via virtualization technology. Second, the system's future workloads must be predicted: a cubic exponential smoothing algorithm based on a conservative control (CESCC) strategy is proposed, which combines the system's current state and resource distribution to estimate the resource demand of the next period of task requests. Then, a multi-objective constrained optimization model of power consumption and a low-energy resource allocation algorithm based on probabilistic matching (RA-PM) are proposed. To reduce power consumption further, a resource allocation algorithm based on improved simulated annealing (RA-ISA) is designed. Experimental results show that the prediction and conservative control strategy keep resource pre-allocation in step with demand, improving real-time response efficiency and system stability. Both RA-PM and RA-ISA activate fewer hosts, achieve better load balance among the set of highly applicable hosts, maximize resource utilization, and greatly reduce the power consumption of cloud computing systems.
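The cubic (triple) exponential smoothing step can be sketched with Brown's classical method, which fits a quadratic trend through three cascaded smoothers. This is a minimal stand-in for the paper's CESCC: the conservative-control adjustment is not reproduced, and the workload series and smoothing factor are hypothetical.

```python
# Illustrative sketch of Brown's triple (cubic) exponential smoothing for
# one-step-ahead workload prediction. The conservative-control (CESCC)
# adjustment from the paper is omitted; data and alpha are hypothetical.

def triple_exponential_smoothing(series, alpha=0.5):
    """Return one-step-ahead forecasts using three cascaded smoothers."""
    s1 = s2 = s3 = series[0]                   # common initialization choice
    forecasts = []
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1      # single smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # double smoothing
        s3 = alpha * s2 + (1 - alpha) * s3     # triple smoothing
        # Brown's quadratic-trend coefficients for F(t+m) = a + b*m + c*m^2/2.
        a = 3 * s1 - 3 * s2 + s3
        b = (alpha / (2 * (1 - alpha) ** 2)) * (
            (6 - 5 * alpha) * s1
            - (10 - 8 * alpha) * s2
            + (4 - 3 * alpha) * s3)
        c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
        forecasts.append(a + b + c / 2)        # forecast for m = 1 step ahead
    return forecasts

workload = [52, 55, 60, 58, 63, 68, 72, 70, 75, 80]   # hypothetical CPU demand
pred = triple_exponential_smoothing(workload)
print(pred[-1])   # predicted demand for the next period
```

Because the smoother fits a quadratic trend, it extrapolates the rising demand beyond the last observation, which is the behavior a pre-allocation scheme relies on to "turn on the demanded" ahead of time.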
Abstract: Artificial intelligence (AI) has evolved at an unprecedented pace in recent years. This rapid advancement includes algorithmic breakthroughs, cross-disciplinary integration, and diverse applications, driven by growing computational power, massive datasets, and collaborative global research. This special issue on Emerging Artificial Intelligence Technologies and Applications was conceived to provide a platform for communicating cutting-edge AI research: novel methodologies, cross-domain applications, and critical advances in addressing real-world challenges. Over the past months, we have witnessed a remarkable diversity of submissions, reflecting the global trend of AI innovation. Below, we synthesize the key insights from these works, highlighting their collective contribution to advancing AI's theoretical frontiers and practical applications.
Abstract: With the growing demand for deep integration between computing power networks (CPNs) and energy systems (ESs), effective collaboration between these systems has become increasingly crucial. To facilitate such integration, this paper proposes an energy-computing integrated system (ECIS) consisting of a four-layer framework: a physical layer, a networked digital twin layer, a service layer, and a communication layer, each interdependent and playing a distinct role. The ECIS enables global dynamic scheduling and optimisation of electric power and computing power resources. We provide a detailed overview of the functions and interactions within the four layers of the ECIS, discussing its potential to enhance resource utilisation, support green and low-carbon development, and improve system flexibility. By fostering efficient collaboration between power and computing resources, the proposed four-layer framework can significantly improve operational efficiency. Furthermore, we explore potential challenges in implementing the ECIS and outline future research directions to address them.
Abstract: This article investigates the dynamic relationship between technology and artificial intelligence (AI) and the role that societal requirements play in pushing AI research and adoption. Technology has advanced dramatically over the years, providing the groundwork for the rise of AI. AI systems have achieved incredible feats in various disciplines thanks to advancements in computing power, data availability, and complex algorithms. On the other hand, society's needs for efficiency, enhanced healthcare, environmental sustainability, and personalized experiences have acted as powerful accelerators for AI's progress. This article digs into how technology empowers AI and how societal needs dictate its progress, emphasizing their symbiotic relationship. The findings underline the significance of responsible AI research, which considers both technological prowess and ethical issues, to ensure that AI continues to serve the greater good.
Abstract: The International Union of Geological Sciences (IUGS) is evaluating whether there are additional geoscientific activities that would help mitigate the impacts of tsunamis. Public concern about poor decisions and inaction, together with advances in computing power and data mining, calls for new scientific approaches. Three fundamental requirements for mitigating the impacts of natural hazards are defined: (1) improvement of process-oriented understanding, (2) adequate monitoring and optimal use of data, and (3) generation of advice based on scientific, technical, and socio-economic expertise. International leadership and coordination are also important.
Funding: Supported in part by the U.S. National Science Foundation under Grant Nos. CCF-1551511 and CNS-1551262.
Abstract: Modern computer systems are increasingly bounded by the available or permissible power at multiple layers, from individual components to data centers. To cope with this reality, it is necessary to understand how power bounds impact performance, especially for systems built from high-end nodes, each consisting of multiple power-hungry components. Because placing an inappropriate power bound on a node or a component can lead to severe performance loss, coordinating power allocation among nodes and components is mandatory to achieve the desired performance given a total power budget. In this article, we describe the paradigm of power-bounded high-performance computing, which treats coordinated power bound assignment as a key factor in computer system performance analysis and optimization. We apply this paradigm to the problem of power coordination across multiple layers for both CPU and GPU computing. Using several case studies, we demonstrate how the principles of balanced power coordination can be applied and adapted to the interplay of workloads, hardware technology, and the available total power for performance improvement.
Abstract: Computing power, algorithms, and data are the core factors of the digital economy. According to figures from the Ministry of Industry and Information Technology, as of the end of June, China had built 10.85 million standard racks, and its intelligent computing power had reached 788 EFLOPS, an indicator of system speed where one EFLOPS equals 1 quintillion floating-point calculations per second. China tops the global large language model count, with 1,509 models having been released.
Abstract: This book elaborates on the megatrends in the evolution of AI technology, the breakthroughs in computing power-driven computing systems, and how AI can empower the life sciences, the Internet of Things, autonomous driving, and more. It also systematically analyzes the problems and risks behind the rapid development of artificial intelligence, as well as the countermeasures taken by governments, enterprises, and scientific research communities at home and abroad. Finally, the book looks forward to the future technological development trends and innovation paths of the entire industry, the opportunities and challenges for China within it, the roles and responsibilities of industry, academia, and research in the fourth industrial revolution, talent training, and global scientific and technological exchanges in the AI era.
Abstract: 1 Introduction. With rapid development in computing power and breakthroughs in deep learning, the concept of "foundation models" has been introduced to the AI community. Generally, foundation models are large models trained on massive data that can be easily adapted to different domains for various tasks. With specific prompts, foundation models can generate text and images, or even animate scenarios based on given descriptions. Owing to these powerful capabilities, there is a growing trend of building agents on foundation models. In this paper, we investigate agents empowered by foundation models.
Funding: Supported by the Natural Science Foundation of Hunan Province (No. 2022JJ10066) and the National Natural Science Foundation of China (Grant No. 62272477).
Abstract: 1 Introduction. The perpetual demand for computational power in scientific computing incessantly propels high-performance computing (HPC) systems toward Zettascale computation and beyond [1,2]. Concurrently, the ascension of artificial intelligence (AI) has engendered a marked surge in computational requisites, with the required computational performance doubling every 3.4 months [3]. The collective pursuit of ultra-high computational capability has positioned AI and scientific computing as the preeminent twin drivers of HPC.