Air pollution in China covers a large area with complex sources and formation mechanisms, making it a unique place to conduct air pollution and atmospheric chemistry research. The National Natural Science Foundation of China's Major Research Plan entitled "Fundamental Researches on the Formation and Response Mechanism of the Air Pollution Complex in China" (the Plan) has funded 76 research projects to explore the causes of air pollution in China and the key atmospheric physics and atmospheric chemistry processes involved. To consolidate the abundant data produced under the Plan and extend its long-term impact at home and abroad, an integration project was charged with collecting the various types of data generated by the 76 projects. The project has classified and integrated these data into eight categories comprising 258 datasets and 15 technical reports in total, and has led to the establishment of the China Air Pollution Data Center (CAPDC) platform, which provides storage, retrieval, and download services for all eight categories. The platform's distinct features include data visualization, querying of related project information, and bilingual service in both English and Chinese, allowing rapid searching and downloading of data and providing a solid foundation of data and support for future related research. Air pollution control in China, especially in the past decade, is undeniably a global exemplar, and CAPDC is the first data center in China to focus on research into the country's air pollution complex.
Most data centers currently tap into existing power grids to draw the immense amount of electricity they need to operate. But many of the data centers that Google (Mountain View, CA, USA) plans to open in the next few years will boast their own power plants, an arrangement known as colocation [1]. Under an agreement announced in December 2024, the company will site data centers in industrial parks where its partner, Intersect Power of Houston, TX, USA, has installed clean power facilities [1,2]. The first of these complexes is scheduled to come online in 2026 [1].
With the advent of the digital economy, there has been a rapid proliferation of small-scale Internet data centers (SIDCs). By leveraging their spatiotemporal load regulation potential through data workload balancing, aggregated SIDCs have emerged as promising demand response (DR) resources for future power distribution systems. This paper presents a framework for assessing the capacity value (CV) of aggregated SIDCs participating in DR programs (SIDC-DR). First, we delineate the concept of CV tailored to aggregated SIDC scenarios and establish a metric for its assessment. Considering the effects of data load dynamics, equipment constraints, and user behavior, we then develop a DR model for aggregated SIDCs using a data network aggregation method. Unlike existing studies, the proposed model captures the uncertainty in end tenants' decisions to opt into an SIDC-DR program through a novel Z-number formulation, which accounts for both the uncertainty in user participation intentions and the reliability of the underlying information during the DR process, enabling high-resolution profiling of SIDC-DR potential in the CV evaluation. Simulation results from numerical studies on a modified IEEE 33-node distribution system confirm the effectiveness of the proposed approach and highlight the potential benefits of SIDC-DR utilization in the efficient operation of future power systems.
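The abstract above does not spell out the Z-number formulation. As a rough illustration of the general idea it names (a participation estimate paired with a reliability weight on the underlying information), one might discount each tenant's expected demand-response contribution by the confidence in that estimate. The function and numbers below are a hypothetical sketch, not the authors' model:

```python
def expected_dr_capacity(tenants):
    """Toy aggregation of demand-response capacity from SIDC tenants.

    Each tenant is (flexible_load_kw, participation_prob, reliability),
    loosely mirroring a Z-number's (restraint, reliability) pair: the
    participation estimate is discounted by how reliable the underlying
    information is. Hypothetical, for illustration only.
    """
    return sum(load * prob * rel for load, prob, rel in tenants)

tenants = [
    (50.0, 0.8, 0.9),  # 50 kW flexible, 80% likely to opt in, high-confidence info
    (30.0, 0.5, 0.6),  # 30 kW flexible, uncertain intent, less reliable info
]
capacity = expected_dr_capacity(tenants)  # 50*0.8*0.9 + 30*0.5*0.6 = 45.0 kW
```

A full CV assessment would feed such discounted contributions into the operation simulation rather than summing them directly.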
The National Population Health Data Center (NPHDC) is one of China's 20 national-level science data centers, jointly designated by the Ministry of Science and Technology and the Ministry of Finance. Operated by the Chinese Academy of Medical Sciences under the oversight of the National Health Commission, NPHDC adheres to national regulations including the Scientific Data Management Measures and the National Science and Technology Infrastructure Service Platform Management Measures, and is committed to collecting, integrating, managing, and sharing biomedical and health data through an open-access platform, fostering open sharing and engaging in international cooperation.
Data centers operate as physical digital infrastructure for generating, storing, computing, transmitting, and utilizing massive amounts of data and information, constituting the backbone of the flourishing digital economy across the world. Given the lack of a consistent framework for studying the locational factors of data centers and the scarcity of longitudinal investigations into the spatial dynamics of heterogeneous data centers, this paper develops a comprehensive analytical framework to examine the dynamic geographies and locational factors of techno-environmentally heterogeneous data centers across Chinese cities over the period 2006–2021. First, we develop a "supply-demand-environment trinity" analytical framework and an accompanying evaluation indicator system with Chinese characteristics. Second, the dynamic geographies of data centers in Chinese cities over the past decades are characterized by spatial polarization in economically leading urban agglomerations alongside persistent interregional gaps across the eastern, central, and western regions. Data centers exhibit dual spatial expansion trajectories featuring outward radiation from eastern core urban agglomerations to adjacent peripheries and leapfrog diffusion to strategic central and western digital infrastructure hubs. Third, we empirically verify that data center construction in Chinese cities has been jointly influenced by supply-, demand-, and environment-side locational factors, supporting the efficacy of the trinity framework. Overall, our findings demonstrate the temporal variance, contextual contingency, and attribute-based differentiation of the locational factors underlying techno-environmentally heterogeneous data centers in Chinese cities.
As the world's largest digital economy, China has a significant demand for data centers, which are energy-intensive. With installed capacity growing at 28% annually, these centers are primarily located in the developed eastern region, where land and energy resources are limited. This concentration poses a major challenge to the industry's net-zero goal. To address it, China has launched a bold initiative to relocate data centers to the western region, leveraging natural cooling, clean energy, and cost-effective resources. By 2030, this move is expected to reduce emissions from the data center sector by 16%–20%, generating direct economic benefits of approximately 53 billion USD. The success of this initiative can serve as a model for other countries developing their Internet infrastructure.
The effect of a gradient exhaust strategy and blind plate installation on the inhibition of backflow and thermal stratification in data center cabinets is systematically investigated in this study through numerical methods. The validated Re-Normalization Group (RNG) k-ε turbulence model was used to analyze airflow patterns within cabinet structures equipped with backplane air conditioning. Key findings reveal that server-generated thermal plumes induce hot air accumulation at the cabinet apex, creating a 0.8 °C temperature elevation at the top server's inlet compared to the ideal situation (23 °C). Strategic increases in backplane fan exhaust airflow rates reduce server 1's inlet temperature from 26.1 °C (0% redundancy case) to 23.1 °C (40% redundancy case). Gradient exhaust strategies achieve server temperature performance equivalent to uniform exhaust distributions while requiring 25% less redundant airflow. This approach decreases the recirculation ratio from 1.52% (uniform exhaust at 15% redundancy) to 0.57% (gradient exhaust at equivalent redundancy). Comparative analyses demonstrate divergent thermal behaviors: in bottom-server-absent configurations, gradient exhaust reduces top server inlet temperatures by 1.6 °C versus uniform exhaust, whereas top-server-absent configurations exhibit a 1.8 °C temperature increase under gradient conditions. The blind plate implementation achieves a 0.4 °C top server temperature reduction compared to 15%-redundancy uniform exhaust systems without requiring additional airflow redundancy. Partially populated server arrangements with blind plates maintain thermal characteristics comparable to fully populated cabinets. This study validates gradient exhaust and blind plate technologies as effective countermeasures against cabinet-scale thermal recirculation, providing actionable insights for optimizing backplane air conditioning systems in mission-critical data center environments.
The growth of computing power in data centers (DCs) leads to increased energy consumption and noise pollution from air cooling systems. Chip-level cooling with a high-efficiency coolant is one promising way to address the cooling challenge posed by high-power devices in DCs. Hybrid nanofluids (HNFs) offer high thermal conductivity and good rheological properties. This study summarizes numerical investigations of HNFs in mini/micro heat sinks, covering numerical methods, hydrothermal characteristics, and enhanced heat transfer technologies. The contributions of this paper are: (1) the characteristics, applicable conditions, and scenarios of each theoretical and numerical method are clarified; (2) molecular dynamics (MD) simulation can reveal the synergy effects, micro-scale motion, and agglomeration morphology of different nanoparticles, while machine learning (ML) presents a feasible method for parameter prediction, opening the way to intelligent regulation of the thermal performance of HNFs; (3) HNF flow boiling and the synergy of passive and active technologies may further improve the overall efficiency of liquid cooling systems in DCs. This review provides valuable insights and references for exploring the multi-phase flow and heat transport mechanisms of HNFs and promoting the practical application of HNFs in chip-level liquid cooling in DCs.
To improve traffic scheduling capability in operator data center networks, an analysis, prediction, and online scheduling mechanism (APOS) is designed that considers both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization (FTO) algorithm is embedded into both the analysis-prediction and online-scheduling stages, and an FTO-based traffic scheduling strategy is proposed. By exploiting FTO's global-optimization and multi-modal optimization advantages, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. Experimental results show that the FTO traffic scheduling strategy schedules traffic in data center networks reasonably and effectively improves load balancing in the operator data center network.
Propelled by the rise of artificial intelligence, cloud services, and data center applications, next-generation, low-power, local-oscillator-less, digital signal processing (DSP)-free, short-reach coherent optical communication has become an increasingly prominent area of research in recent years. Here, we demonstrate DSP-free coherent optical transmission using analog signal processing in a frequency synchronous optical network (FSON) architecture, which supports polarization multiplexing and higher-order modulation formats. The FSON architecture allows the numerous laser sources of optical transceivers within a data center to be quasi-synchronized by means of a tree-distributed homology architecture. In conjunction with our proposed pilot-tone-assisted Costas loop for an analog coherent receiver, we achieve a record dual-polarization 224-Gb/s 16-QAM 5-km mismatch transmission with reset-free carrier phase recovery in the optical domain. The proposed DSP-free analog coherent detection system based on the FSON is thus a promising solution for next-generation, low-power, high-capacity coherent data center interconnects.
The rapid advancement of artificial intelligence (AI) has significantly increased the computational load on data centers. AI-related computational activities consume considerable electricity and result in substantial carbon emissions. To mitigate these emissions, future data centers should be strategically planned and operated to fully utilize renewable energy resources while meeting growing computational demands. This paper investigates how much carbon emission reduction can be achieved by using carbon-oriented demand response to guide the optimal planning and operation of data centers. A carbon-oriented data center planning model is proposed that considers the carbon-oriented demand response of the AI load. In the planning model, future operation simulations comprehensively coordinate the temporal–spatial flexibility of computational loads and the quality of service (QoS). An empirical study based on the proposed models is conducted on real-world data from China. The results show that newly constructed data centers are recommended to be built in Gansu Province, Ningxia Hui Autonomous Region, Sichuan Province, Inner Mongolia Autonomous Region, and Qinghai Province, accounting for 57% of the total national increase in server capacity. In addition, 33% of the computational load from Eastern China should be transferred to the West, which could reduce overall load carbon emissions by 26%.
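The planning model itself is not reproduced in this summary. As a minimal sketch of the underlying idea of carbon-oriented load shifting, transferable computational load can be placed greedily into the region with the lowest carbon intensity until its spare capacity is exhausted; all region names, capacities, and intensities below are hypothetical, not the paper's data:

```python
def place_load(load_mw, regions):
    """Greedy carbon-aware placement: fill lowest-carbon regions first.

    regions: list of (name, spare_capacity_mw, kgCO2_per_mwh).
    Returns (allocation dict, total kgCO2 emitted per hour of operation).
    Hypothetical sketch; a real planning model would also price QoS,
    transmission limits, and temporal flexibility.
    """
    alloc, emissions = {}, 0.0
    for name, cap, intensity in sorted(regions, key=lambda r: r[2]):
        take = min(load_mw, cap)
        if take > 0:
            alloc[name] = take
            emissions += take * intensity
            load_mw -= take
    if load_mw > 1e-9:
        raise ValueError("insufficient capacity across regions")
    return alloc, emissions

regions = [("East", 100, 550.0), ("West", 80, 300.0)]
alloc, em = place_load(120, regions)  # 80 MW to West, remaining 40 MW to East
```

Shifting the 80 MW westward in this toy example cuts hourly emissions versus an all-East placement, mirroring the east-to-west transfer the study quantifies.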
The 6th-generation mobile network (6G) is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for the various flow types is formulated using differently defined reward functions. To balance fairness and utility, corresponding reward functions are designed for the cases where flows undergo abrupt changes and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple flow types, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. Introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks adds information about temporal continuity, further enhancing the algorithm's ability to adapt to changes in the dynamic network environment. LSTM+MADDPG is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that it improves delay convergence speed by 14.6% and postpones the onset of packet loss by 18.2% compared with the other algorithms.
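The reward functions behind the weighted fairness objective are not detailed in this summary. The fairness notion it targets is commonly grounded in weight-proportional bandwidth sharing, which can be sketched as follows; this is a generic illustration (no per-flow demand caps), not the authors' algorithm:

```python
def weighted_allocation(capacity, weights):
    """Split link capacity across flows in proportion to their weights,
    a simple form of performance-centric weighted fairness.

    capacity: total bandwidth to divide; weights: one positive weight
    per flow (e.g. reflecting QoS class). Generic sketch only.
    """
    total = sum(weights)
    return [capacity * w / total for w in weights]

# One best-effort flow and two higher-priority flows sharing 100 units.
shares = weighted_allocation(100.0, [1, 2, 2])  # -> [20.0, 40.0, 40.0]
```

A full weighted max-min scheme would additionally cap each flow at its demand and redistribute the surplus, which is where the conflicting-utility trade-off the paper formulates arises.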
Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to obtain far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Beyond the cost of the physical assets and network resources, scaling also imposes more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which carries extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price units than those who simply choose to scale. Resource utilization is a challenging process, however; clients of CDNs tend to exaggerate their true resource requirements when they lease resources, and because service providers are committed to their clients through Service Level Agreements (SLAs), any amendment to the resource allocations must first be approved by the clients. In this work, we propose a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may demand high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in this non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. We also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, allowing for better resource utilization rates, higher utilities, and grid-friendly CDNs.
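The incentive-compatible pricing model above extends Vickrey auctions. As a reminder of the base mechanism it builds on: in a sealed-bid second-price (Vickrey) auction, the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy. A minimal sketch of that baseline, not the paper's extended model:

```python
def vickrey_winner(bids):
    """Second-price sealed-bid auction: the highest bidder wins but
    pays the second-highest bid (the classic Vickrey mechanism).

    bids: dict of bidder -> bid amount; requires at least two bidders.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

winner, price = vickrey_winner({"A": 10, "B": 7, "C": 9})  # -> ("A", 9)
```

Because the payment does not depend on the winner's own bid, no bidder gains by misreporting their valuation; the paper's model transplants this truthfulness property into the resource-return negotiation.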
Traffic within data centers exhibits bursty, unpredictable patterns. This rapid growth in network traffic has two consequences: it surpasses the inherent capacity of the network's link bandwidth and creates an imbalanced network load, and the resulting persistent overload eventually causes network congestion. Software Defined Network (SDN) technology is employed in data centers as a network architecture to enhance performance. This paper introduces an adaptive congestion control strategy for SDN-based data centers, named DA-DCTCP, which incorporates Explicit Congestion Notification (ECN) and Round-Trip Time (RTT) to establish congestion awareness and an ECN marking model. To mitigate false congestion signals caused by sudden flow bursts, an appropriate ECN mark is selected based on the queue length and its growth slope, and the congestion window (CWND) is adjusted by calculating the RTT. Simultaneously, the queue-length marking threshold is continuously adapted, using the switch's current queue length as a parameter, to accommodate changes in the data center. Evaluation through Mininet simulations demonstrates that DA-DCTCP yields advantages in throughput, flow completion time (FCT), latency, and resistance to packet loss, which help reduce data center congestion, improve the stability of data transmission, and increase throughput.
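DA-DCTCP builds on DCTCP, whose well-known congestion window rule scales the sender's back-off with the fraction α of ECN-marked packets: cwnd ← cwnd · (1 − α/2). A minimal sketch of that baseline rule (the adaptive marking-threshold logic DA-DCTCP adds on top is not shown):

```python
def dctcp_window(cwnd, alpha):
    """DCTCP multiplicative decrease: back off in proportion to the
    EWMA fraction alpha of ECN-marked packets (alpha in [0, 1]).

    alpha = 1 (every packet marked) behaves like classic TCP halving;
    alpha = 0 (no marks) leaves the window untouched, so mild
    congestion causes only a mild reduction.
    """
    return cwnd * (1 - alpha / 2)

assert dctcp_window(100.0, 1.0) == 50.0   # fully congested -> halve
assert dctcp_window(100.0, 0.0) == 100.0  # no marks -> no back-off
```

This proportional response is what lets ECN-based schemes keep switch queues short without the throughput collapse of loss-based congestion control.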
With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet current demands. The development of software defined networking (SDN) has brought new opportunities and challenges to future networks: SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load balancing mechanisms for data centers from different perspectives, and finally summarizes research on SDN-based load balancing mechanisms and their development trends.
In September 2024, Constellation Energy of Baltimore, MD, USA, owner and operator of the Three Mile Island (TMI) nuclear power plant near Middletown, PA, USA, announced that it would reopen the plant's recently shuttered Unit 1 reactor to provide electricity for data centers owned by tech giant Microsoft (Redmond, WA, USA) [1-3].
According to the International Energy Agency (IEA), a standard AI-oriented data center consumes as much electricity as 100,000 households. Data centers also use enormous amounts of water to build and cool electrical components, and they generate a great deal of electronic waste.
With the rapid development of information technology, networks are expanding in scale and growing more complex by the day, and traditional network management faces great challenges. The emergence of software-defined networking (SDN) has brought revolutionary changes to modern network management. This paper discusses the application and prospects of SDN in modern network management. First, the basic principles and architecture of SDN are introduced, including the separation of the control and data planes, centralized control, and open programmable interfaces. The paper then analyzes the advantages of SDN in network management, such as simplified network configuration, improved network flexibility, optimized network resource utilization, and fast fault recovery, and examines application examples in data center networks and WAN optimization management. It also discusses the development status and trends of SDN in enterprise networks, including integration with technologies such as cloud computing, big data, and artificial intelligence; the construction of intelligent, automated network management platforms; improvements in network management efficiency and quality; and the openness and interoperability of network equipment. Finally, the advantages and challenges of SDN are summarized and its future development directions are outlined.
The effects of centering response and explanatory variables as a way of simplifying fitted linear models in the presence of correlation are reviewed and extended to include nonlinear models, common in many biological and economic applications. In a nonlinear model, the use of a local approximation can modify the effect of centering. Even in the presence of uncorrelated explanatory variables, centering may affect linear approximations and related test statistics. An approach to assessing this effect in relation to intrinsic curvature is developed and applied. Mis-specification bias of linear versus nonlinear models also reflects this centering effect.
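The benign effect of centering in the purely linear case is easy to demonstrate: subtracting the mean from an explanatory variable leaves the slope unchanged and re-expresses the intercept as the fitted value at the mean. A small self-contained sketch (ordinary least squares in closed form, illustrative data):

```python
def fit_line(x, y):
    """Ordinary least squares for y = b0 + b1*x, closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b0 = my - b1 * mx
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]                 # roughly y = 2x

b0, b1 = fit_line(x, y)                  # slope ~2, intercept near 0
xc = [xi - sum(x) / len(x) for xi in x]  # centered predictor
b0c, b1c = fit_line(xc, y)               # slope unchanged; intercept = mean(y)
```

In a nonlinear model, by contrast, the local (tangent-plane) approximation is taken at a different point after centering, which is why the paper's intrinsic-curvature analysis is needed: the invariance shown above no longer holds exactly.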
Rack-level loop thermosyphons have been widely adopted as a solution to data centers' growing energy demands. While numerous studies have highlighted the heat transfer performance and energy-saving benefits of this system, its economic feasibility, water usage effectiveness (WUE), and carbon usage effectiveness (CUE) remain underexplored. This study introduces a comprehensive evaluation index designed to assess the applicability of the rack-level loop thermosyphon system across various computing hub nodes. The air wet-bulb temperature Ta,w was identified as the most significant factor influencing the variability in the combination of PUE, CUE, and WUE values. The results indicate that, on the comprehensive evaluation index, the rack-level loop thermosyphon system achieves the highest score in Lanzhou (94.485) and the lowest in Beijing (89.261). The overall ranking of cities by comprehensive evaluation score is: Gansu hub (Lanzhou) > Inner Mongolia hub (Hohhot) > Ningxia hub (Yinchuan) > Yangtze River Delta hub (Shanghai) > Chengdu-Chongqing hub (Chongqing) > Guangdong-Hong Kong-Macao Greater Bay Area hub (Guangzhou) > Guizhou hub (Guiyang) > Beijing-Tianjin-Hebei hub (Beijing). Furthermore, Hohhot, Lanzhou, and Yinchuan consistently rank among the top three cities for comprehensive score across all load rates, while Guiyang (at a 25% load rate), Guangzhou (at a 50% load rate), and Beijing (at 75% and 100% load rates) exhibit the lowest comprehensive scores.
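The comprehensive evaluation index itself is not specified in this summary. A generic way to combine several lower-is-better metrics such as PUE, CUE, and WUE into a single 0-100 site score is to min-max normalize each metric across the candidate sites and take a weighted sum; the weights and numbers below are hypothetical, not the study's:

```python
def composite_score(metrics, weights):
    """Combine lower-is-better metrics (e.g. PUE, CUE, WUE) into one
    0-100 score via min-max normalization across candidate sites.

    metrics: dict site -> tuple of metric values; weights sum to 1.
    Hypothetical sketch, not the paper's evaluation index.
    """
    n = len(weights)
    lo = [min(v[i] for v in metrics.values()) for i in range(n)]
    hi = [max(v[i] for v in metrics.values()) for i in range(n)]
    scores = {}
    for site, vals in metrics.items():
        s = 0.0
        for i, (v, w) in enumerate(zip(vals, weights)):
            span = hi[i] - lo[i]
            # lower metric -> higher normalized score in [0, 1]
            s += w * (1.0 if span == 0 else (hi[i] - v) / span)
        scores[site] = 100.0 * s
    return scores

sites = {"Lanzhou": (1.15, 0.30, 0.5), "Beijing": (1.35, 0.55, 1.2)}
scores = composite_score(sites, [0.4, 0.3, 0.3])  # best site near 100, worst 0
```

With only two sites the best site scores at the top of the scale and the worst at the bottom; a real index would also fold in economic terms and the wet-bulb-temperature sensitivity the study identifies.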
Funding (air pollution data center study): supported by the National Natural Science Foundation of China (Grant No. 92044303).
Funding (SIDC demand response study): supported in part by the National Natural Science Foundation of China under Grant 52177082, and in part by the Beijing Nova Program under Grant 20220484007.
Funding: Major Program of the National Social Science Foundation of China, No. 21&ZD107.
Abstract: Data centers operate as the physical digital infrastructure for generating, storing, computing, transmitting, and utilizing massive volumes of data and information, constituting the backbone of the flourishing digital economy across the world. Given the lack of a consistent analytical lens for studying the locational factors of data centers, and the empirical deficiencies of longitudinal investigations into the spatial dynamics of heterogeneous data centers, this paper develops a comprehensive analytical framework to examine the dynamic geographies and locational factors of techno-environmentally heterogeneous data centers across Chinese cities over the period 2006–2021. First, we develop a "supply-demand-environment trinity" analytical framework, together with an accompanying evaluation indicator system with Chinese characteristics. Second, the dynamic geographies of data centers in Chinese cities over this period are characterized by spatial polarization in economically leading urban agglomerations alongside persistent interregional gaps across the eastern, central, and western regions. Data centers exhibit dual spatial expansion trajectories, featuring outward radiation from eastern core urban agglomerations to adjacent peripheries and leapfrog diffusion to strategic central and western digital infrastructure hubs. Third, it is empirically verified that data center construction in Chinese cities has been jointly influenced by supply-, demand-, and environment-side locational factors, confirming the efficacy of the trinity analytical framework. Overall, our findings demonstrate the temporal variance, contextual contingency, and attribute-based differentiation of the locational factors underlying techno-environmentally heterogeneous data centers in Chinese cities.
Funding: Supported by the Joint Research Project for the Yangtze River Conservation (Phase II), China (2022-LHYJ-02-0401).
Abstract: As the world's largest digital economy, China has a significant demand for data centers, which are energy-intensive. With an annual growth rate of 28% in installed capacity, these centers are primarily located in the developed eastern region, where land and energy resources are limited. This concentration poses a major challenge to the industry's net-zero goal. To address it, China has launched a bold initiative to relocate data centers to the western region, leveraging natural cooling, clean energy, and cost-effective resources. By 2030, this move is expected to reduce emissions from the data center sector by 16%–20%, generating direct economic benefits of approximately 53 billion USD. The success of this initiative can serve as a model for other countries developing their internet infrastructure.
Funding: Financially supported by the Basic Research Funds for the Central Government "Innovative Team of Zhejiang University" under contract number 2022FZZX01-09.
Abstract: The effects of a gradient exhaust strategy and blind plate installation on the inhibition of backflow and thermal stratification in data center cabinets are systematically investigated in this study through numerical methods. The validated Re-Normalization Group (RNG) k-ε turbulence model was used to analyze airflow patterns within cabinet structures equipped with backplane air conditioning. Key findings reveal that server-generated thermal plumes induce hot air accumulation at the cabinet apex, creating a 0.8℃ temperature elevation at the top server's inlet compared to the ideal condition (23℃). Strategic increases in backplane fan exhaust airflow rates reduce server 1's inlet temperature from 26.1℃ (0% redundancy case) to 23.1℃ (40% redundancy case). Gradient exhaust strategies achieve server temperature performance equivalent to that of uniform exhaust distributions while requiring 25% less redundant airflow. This approach decreases the recirculation ratio from 1.52% (uniform exhaust at 15% redundancy) to 0.57% (gradient exhaust at equivalent redundancy). Comparative analyses demonstrate divergent thermal behaviors: in bottom-server-absent configurations, gradient exhaust reduces top server inlet temperatures by 1.6℃ vs. uniform exhaust, whereas top-server-absent configurations exhibit a 1.8℃ temperature increase under gradient conditions. Blind plate implementation achieves a 0.4℃ top server temperature reduction compared to 15%-redundancy uniform exhaust systems without requiring additional airflow redundancy. Partially populated server arrangements with blind plates maintain thermal characteristics comparable to fully populated cabinets. This study validates gradient exhaust and blind plate technologies as effective countermeasures against cabinet-scale thermal recirculation, providing actionable insights for optimizing backplane air conditioning systems in mission-critical data center environments.
Funding: Funded by the Science and Technology Project of Tianjin (No. 24YDTPJC00680) and the National Natural Science Foundation of China (No. 52406191).
Abstract: The growth of computing power in data centers (DCs) leads to increased energy consumption and noise pollution from air cooling systems. Chip-level cooling with a high-efficiency coolant is one of the most promising methods for addressing the cooling challenge of high-power devices in DCs. Hybrid nanofluids (HNFs) offer the advantages of high thermal conductivity and good rheological properties. This study summarizes numerical investigations of HNFs in mini/micro heat sinks, covering numerical methods, hydrothermal characteristics, and enhanced heat transfer technologies. The contributions of this paper include: (1) the characteristics, applicable conditions, and scenarios of each theoretical and numerical method are clarified; (2) molecular dynamics (MD) simulation is shown to reveal the synergy effects, micro-scale motion, and agglomeration morphology of different nanoparticles, while machine learning (ML) presents a feasible method for parameter prediction, opening the door to intelligent regulation of the thermal performance of HNFs; and (3) HNF flow boiling and the synergy of passive and active technologies may further improve the overall efficiency of liquid cooling systems in DCs. This review provides valuable insights and references for exploring the multi-phase flow and heat transport mechanisms of HNFs and for promoting the practical application of HNFs in chip-level liquid cooling in DCs.
Funding: Supported by the National Natural Science Foundation of China (No. 62163036).
Abstract: To improve the traffic scheduling capability of operator data center networks, an analysis, prediction, and online scheduling mechanism (APOS) is designed that considers both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization (FTO) algorithm is embedded into the analysis-prediction and online-scheduling stages, yielding an FTO-based traffic scheduling strategy. By exploiting FTO's global-optimization and multi-modal search capabilities, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. Experimental results show that the FTO traffic scheduling strategy schedules traffic in data center networks reasonably and effectively improves load balancing in the operator data center network.
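The abstract does not disclose FTO's internal mechanics, so the following is not the paper's algorithm; it is a minimal greedy baseline, under assumed inputs (link capacities, candidate paths, and flow demands are all hypothetical), illustrating the load-balancing objective such a scheduler targets: place each flow on the candidate path whose bottleneck link would be least utilized.

```python
# Greedy flow-to-path assignment baseline (illustrative only; NOT the FTO
# strategy from the paper). All links, capacities, and flows are hypothetical.

def assign_flows(paths_per_flow, capacities, demands):
    """paths_per_flow: {flow: [path, ...]}, each path a list of link ids.
    Greedily picks, per flow, the path minimizing the worst-link utilization."""
    load = {link: 0.0 for link in capacities}
    choice = {}
    # Place larger flows first: a common heuristic for better balance.
    for flow in sorted(paths_per_flow, key=lambda f: -demands[f]):
        best_path, best_util = None, float("inf")
        for path in paths_per_flow[flow]:
            # Utilization of the most-loaded link if this flow were added here.
            util = max((load[l] + demands[flow]) / capacities[l] for l in path)
            if util < best_util:
                best_path, best_util = path, util
        for l in best_path:
            load[l] += demands[flow]
        choice[flow] = best_path
    return choice, load

capacities = {"a": 10.0, "b": 10.0, "c": 10.0}
paths = {"f1": [["a"], ["b"]], "f2": [["a"], ["c"]], "f3": [["b"], ["c"]]}
demands = {"f1": 6.0, "f2": 5.0, "f3": 4.0}
choice, load = assign_flows(paths, capacities, demands)
print(choice, load)
```

A multi-modal optimizer such as FTO would instead search the space of complete assignments, retaining several near-optimal solutions rather than committing greedily flow by flow.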
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62405250 and 62471404), the China Postdoctoral Science Foundation (Grant No. 2024M762955), the Key Project of the Westlake Institute for Optoelectronics (Grant No. 2023GD003), and the Optical Communication and Sensing Laboratory, School of Engineering, Westlake University.
Abstract: Propelled by the rise of artificial intelligence, cloud services, and data center applications, next-generation, low-power, local-oscillator-less, digital signal processing (DSP)-free, short-reach coherent optical communication has become an increasingly prominent area of research in recent years. Here, we demonstrate DSP-free coherent optical transmission using analog signal processing in a frequency synchronous optical network (FSON) architecture, which supports polarization multiplexing and higher-order modulation formats. The FSON architecture allows the numerous laser sources of optical transceivers within a data center to be quasi-synchronized by means of a tree-distributed homology architecture. In conjunction with our proposed pilot-tone-assisted Costas loop for an analog coherent receiver, we achieve a record dual-polarization 224-Gb/s 16-QAM 5-km mismatch transmission with reset-free carrier phase recovery in the optical domain. Our proposed DSP-free analog coherent detection system based on the FSON makes it a promising solution for next-generation, low-power, high-capacity coherent data center interconnects.
Funding: Supported by the Scientific & Technical Project of the State Grid (5700--202490228A--1--1-ZN).
Abstract: The rapid advancement of artificial intelligence (AI) has significantly increased the computational load on data centers. AI-related computational activities consume considerable electricity and result in substantial carbon emissions. To mitigate these emissions, future data centers should be strategically planned and operated to fully utilize renewable energy resources while meeting growing computational demands. This paper investigates how much carbon emission reduction can be achieved by using carbon-oriented demand response to guide the optimal planning and operation of data centers. A carbon-oriented data center planning model is proposed that incorporates the carbon-oriented demand response of the AI load. In the planning model, future operation simulations comprehensively coordinate the temporal-spatial flexibility of computational loads with quality of service (QoS). An empirical study based on the proposed models is conducted on real-world data from China. The results show that newly constructed data centers should preferentially be built in Gansu Province, the Ningxia Hui Autonomous Region, Sichuan Province, the Inner Mongolia Autonomous Region, and Qinghai Province, accounting for 57% of the total national increase in server capacity, and that transferring 33% of the computational load from Eastern China to the West could reduce overall load carbon emissions by 26%.
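The core of a carbon-oriented placement decision can be sketched with simple arithmetic: shiftable compute load is routed to regions in order of grid carbon intensity, subject to regional capacity. The sketch below is illustrative only; the region names, carbon intensities, and capacities are hypothetical stand-ins, not the paper's data or its full planning model (which also handles QoS and temporal flexibility).

```python
# Carbon-oriented placement of shiftable compute load (illustrative sketch;
# the paper's model jointly optimizes planning, QoS, and temporal-spatial
# flexibility, none of which is reproduced here).

def place_load(total_load, regions):
    """regions: {name: (carbon_intensity_tCO2_per_MWh, capacity_MWh)}.
    Fills the cleanest regions first; returns the allocation and emissions."""
    alloc, emissions, remaining = {}, 0.0, total_load
    for name, (ci, cap) in sorted(regions.items(), key=lambda kv: kv[1][0]):
        take = min(remaining, cap)
        alloc[name] = take
        emissions += take * ci
        remaining -= take
        if remaining <= 0:
            break
    return alloc, emissions

# Hypothetical regions: two clean western grids and one coal-heavy eastern grid.
regions = {"west_hydro": (0.10, 40.0), "west_wind": (0.15, 30.0),
           "east_coal": (0.70, 100.0)}
alloc, em = place_load(100.0, regions)
print(alloc, em)
```

Comparing `em` against the all-eastern baseline (100.0 MWh at 0.70 tCO2/MWh) shows how shifting even part of the load westward cuts total emissions, which is the mechanism behind the paper's 26% reduction figure.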
Abstract: The 6th-generation mobile network (6G) is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based, cloud-edge autonomous, multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, corresponding reward functions are designed for the cases in which flows undergo abrupt changes and smooth changes. In addition, to accommodate the quality of service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks, more information about temporal continuity is captured, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that LSTM+MADDPG improves delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
Funding: The Deanship of Scientific Research at Hashemite University partially funded this work. The authors also thank the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through project number NBU-FFR-2024-1580-08.
Abstract: Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to provide far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Besides the cost of physical assets and network resources, scaling also imposes additional load on the electricity grid to feed the added nodes with the energy required to run and cool them, bringing extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower unit prices than those who simply choose to scale. Resource utilization is a challenging process, however: clients of CDNs tend to exaggerate their true resource requirements when leasing resources, and because service providers are committed to their clients through Service Level Agreements (SLAs), any amendment to resource allocations must first be approved by the clients. In this work, we propose a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which providers seek to retrieve leased but unused resources from their clients. Cooperation cannot be expected from the clients, who may demand high unit prices to return their extra resources to the provider's premises. Hence, to motivate cooperation in this non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, allowing for better resource utilization rates, higher utilities, and grid-friendly CDNs.
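The incentive-compatibility idea the paper builds on can be illustrated with a plain second-price (Vickrey-style) reverse auction, which is only the starting point the authors extend; their behavior-belief extension is not modeled here, and the tenant names and ask prices below are hypothetical.

```python
# Second-price (Vickrey-style) reverse auction sketch for buying back unused
# resources from tenants. Illustrative baseline only: the paper extends this
# with behavior-belief functions, which are not reproduced here.

def buyback_auction(bids):
    """bids: {tenant: ask_price}. The provider buys from the lowest ask but
    pays the second-lowest ask, so asking one's true valuation is a dominant
    strategy (the classic Vickrey incentive-compatibility property)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]   # second-lowest ask sets the price
    return winner, payment

winner, payment = buyback_auction({"t1": 5.0, "t2": 3.0, "t3": 8.0})
print(winner, payment)
```

Because the payment does not depend on the winner's own ask, a tenant gains nothing by inflating the price demanded for returning resources, which is precisely the cooperation-inducing property the negotiation game needs.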
Funding: Supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Abstract: Traffic within data centers exhibits bursty and unpredictable patterns. This rapid growth in network traffic has two consequences: it surpasses the inherent capacity of the network's link bandwidth, and it creates an imbalanced network load. Persistent overload eventually results in network congestion. Software Defined Networking (SDN) is employed in data centers as a network architecture to enhance performance. This paper introduces an adaptive congestion control strategy for SDN-based data centers, named DA-DCTCP. It incorporates Explicit Congestion Notification (ECN) and round-trip time (RTT) to establish congestion awareness and an ECN marking model. To mitigate spurious congestion signals caused by abrupt flows, an appropriate ECN marking is selected based on the queue length and its growth slope, and the congestion window (CWND) is adjusted by calculating the RTT. Simultaneously, the queue-length marking threshold is continuously adapted, using the switch's current queue length as a parameter, to accommodate changes in the data center. Evaluation through Mininet simulations demonstrates that DA-DCTCP yields advantages in throughput, flow completion time (FCT), latency, and resistance to packet loss. These benefits help reduce data center congestion, enhance the stability of data transmission, and improve throughput.
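The two mechanisms the abstract names can be sketched as follows; the thresholds and gain are assumed values for illustration, not DA-DCTCP's actual parameters: (1) ECN marking driven by both queue length and its growth slope, and (2) a DCTCP-style window reduction proportional to the fraction of marked packets.

```python
# Sketch of the two mechanisms described in the abstract, with hypothetical
# parameters (not the paper's DA-DCTCP values).

K = 30          # queue-length marking threshold (packets), assumed
SLOPE_K = 5     # queue growth per interval that also triggers marking, assumed
G = 1 / 16      # EWMA gain for the congestion estimate, as used in DCTCP

def should_mark(queue_len, prev_queue_len):
    """Mark when the queue is long, or short but growing fast (burst onset)."""
    return queue_len > K or (queue_len - prev_queue_len) > SLOPE_K

def update_cwnd(cwnd, alpha, marked_frac):
    """DCTCP-style update: alpha tracks an EWMA of the marked fraction, and
    the window shrinks in proportion to alpha instead of halving per mark."""
    alpha = (1 - G) * alpha + G * marked_frac
    if marked_frac > 0:
        cwnd = cwnd * (1 - alpha / 2)
    return cwnd, alpha

print(should_mark(10, 2))            # short but fast-growing queue -> mark
cwnd, alpha = update_cwnd(100.0, 0.0, 1.0)
print(cwnd, alpha)                   # mild cut while alpha is still small
```

The slope term lets the switch react to a burst before the queue crosses the static threshold, while the proportional window cut keeps throughput high under light marking, which matches the FCT and throughput gains reported for DA-DCTCP.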
Abstract: With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet demand. The development of software-defined networking (SDN) has brought new opportunities and challenges to future networks: SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load-balancing mechanisms for data centers from different perspectives, and finally summarizes research on SDN-based load-balancing mechanisms and their development trends.
Abstract: In September 2024, Constellation Energy of Baltimore, MD, USA, owner and operator of the Three Mile Island (TMI) nuclear power plant near Middletown, PA, USA, announced that it would reopen the plant's recently shuttered Unit 1 reactor to provide electricity for data centers owned by tech giant Microsoft (Redmond, WA, USA) [1-3].
Abstract: According to the International Energy Agency (IEA), a standard AI-oriented data center consumes as much electricity as 100,000 households. Data centers also use enormous amounts of water to build and cool electrical components, and they create a great deal of electronic waste.
Abstract: With the rapid development of information technology, networks are expanding in scale and growing more complex by the day, and traditional network management faces great challenges. The emergence of software-defined networking (SDN) has brought revolutionary changes to modern network management. This paper discusses the application and prospects of SDN in modern network management. First, the basic principles and architecture of SDN are introduced, including the separation of the control and data planes, centralized control, and open programmable interfaces. The paper then analyzes the advantages of SDN in network management, such as simplified network configuration, improved network flexibility, optimized network resource utilization, and fast fault recovery, and examines application examples of SDN in data center networks and WAN optimization management. It also discusses the development status and trends of SDN in enterprise networks, including integration with technologies such as cloud computing, big data, and artificial intelligence; the construction of intelligent, automated network management platforms; improvements in network management efficiency and quality; and the openness and interoperability of network equipment. Finally, the advantages and challenges of SDN are summarized, and future development directions are outlined.
Abstract: The effects of centering response and explanatory variables as a way of simplifying fitted linear models in the presence of correlation are reviewed and extended to include nonlinear models, which are common in many biological and economic applications. In a nonlinear model, the use of a local approximation can modify the effect of centering. Even in the presence of uncorrelated explanatory variables, centering may affect linear approximations and related test statistics. An approach to assessing this effect in relation to intrinsic curvature is developed and applied. Mis-specification bias of linear versus nonlinear models also reflects this centering effect.
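A minimal numeric illustration of why centering is not innocuous in a nonlinear model (the parameter values are arbitrary): in a linear model, centering x shifts only the intercept, but for y = a·exp(b·x), reparameterizing around centered z = x − x̄ leaves the rate b unchanged while rescaling the leading coefficient to a·exp(b·x̄), so inference about a is not invariant to centering.

```python
import math

# For y = a * exp(b * x), centering x at xbar gives
#   y = a*exp(b*x) = [a*exp(b*xbar)] * exp(b*(x - xbar)),
# so the centered parameterization has a' = a*exp(b*xbar) with b unchanged.
# Values below are arbitrary, chosen only to demonstrate the identity.

a, b = 2.0, 0.5
xs = [1.0, 2.0, 3.0, 4.0]
xbar = sum(xs) / len(xs)

def model(x):
    return a * math.exp(b * x)

a_centered = a * math.exp(b * xbar)   # rescaled leading coefficient

# The two parameterizations agree at every design point...
for x in xs:
    z = x - xbar
    assert abs(model(x) - a_centered * math.exp(b * z)) < 1e-12

# ...yet the coefficient itself has changed, unlike a linear-model slope.
print(a, round(a_centered, 6))
```

Since a local linear approximation is taken around different points before and after centering, test statistics built on that approximation can change as well, which is the interaction with intrinsic curvature the paper examines.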
Funding: Supported by the Natural Science Foundation of Hunan Province, China (Grant Nos. 2023JJ50178 and 2023JJ50194) and the Excellent Youth Project of the Hunan Provincial Department of Education (Grant No. 23B0542).
Abstract: Rack-level loop thermosyphons have been widely adopted as a solution to data centers' growing energy demands. While numerous studies have highlighted the heat transfer performance and energy-saving benefits of this system, its economic feasibility, water usage effectiveness (WUE), and carbon usage effectiveness (CUE) remain underexplored. This study introduces a comprehensive evaluation index designed to assess the applicability of the rack-level loop thermosyphon system across various computing hub nodes. The air wet-bulb temperature Ta,w was identified as the most significant factor influencing the variability in the combination of PUE, CUE, and WUE values. The results indicate that the rack-level loop thermosyphon system achieves the highest score in Lanzhou (94.485) and the lowest in Beijing (89.261) based on the comprehensive evaluation index. The overall ranking of cities by comprehensive evaluation score is as follows: Gansu hub (Lanzhou) > Inner Mongolia hub (Hohhot) > Ningxia hub (Yinchuan) > Yangtze River Delta hub (Shanghai) > Chengdu-Chongqing hub (Chongqing) > Guangdong-Hong Kong-Macao Greater Bay Area hub (Guangzhou) > Guizhou hub (Guiyang) > Beijing-Tianjin-Hebei hub (Beijing). Furthermore, Hohhot, Lanzhou, and Yinchuan consistently rank among the top three cities for comprehensive scores across all load rates, while Guiyang (at a 25% load rate), Guangzhou (at a 50% load rate), and Beijing (at 75% and 100% load rates) exhibited the lowest comprehensive scores.
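The abstract does not specify how the comprehensive index is constructed, so the following is only a generic sketch of one plausible form: a weighted sum of min-normalized PUE, CUE, and WUE (lower raw values are better). The weights and the two sites' metric values are hypothetical, not the paper's data or its actual index.

```python
# Hedged sketch of a comprehensive score combining PUE, CUE, and WUE.
# Weights and site metrics are hypothetical; the paper's index construction
# is not reproduced here.

WEIGHTS = {"PUE": 0.5, "CUE": 0.3, "WUE": 0.2}   # assumed weights

def scores(sites):
    """sites: {city: {"PUE": ..., "CUE": ..., "WUE": ...}} -> {city: score}.
    Each metric is scaled as best/value (lower raw value is better), then
    weighted and summed, so the best site on every metric scores 100."""
    best = {m: min(v[m] for v in sites.values()) for m in WEIGHTS}
    return {city: round(100 * sum(WEIGHTS[m] * best[m] / v[m] for m in WEIGHTS), 3)
            for city, v in sites.items()}

sites = {"Lanzhou": {"PUE": 1.15, "CUE": 0.30, "WUE": 0.8},
         "Beijing": {"PUE": 1.30, "CUE": 0.55, "WUE": 1.4}}
print(scores(sites))
```

Because the wet-bulb temperature drives all three raw metrics at once, a site with favorable Ta,w dominates such an index across metrics, consistent with the western hubs leading the paper's ranking.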