Air pollution in China covers a large area with complex sources and formation mechanisms, making it a unique place to conduct air pollution and atmospheric chemistry research. The National Natural Science Foundation of China’s Major Research Plan entitled “Fundamental Researches on the Formation and Response Mechanism of the Air Pollution Complex in China” (or the Plan) has funded 76 research projects to explore the causes of air pollution in China and the key processes of air pollution in atmospheric physics and atmospheric chemistry. In order to summarize the abundant data from the Plan and exhibit its long-term impacts domestically and internationally, an integration project is responsible for collecting the various types of data generated by the 76 projects of the Plan. This project has classified and integrated these data, forming eight categories containing 258 datasets and 15 technical reports in total. The integration project has led to the successful establishment of the China Air Pollution Data Center (CAPDC) platform, providing storage, retrieval, and download services for the eight categories. The platform has distinct features including data visualization, related-project information querying, and bilingual services in both English and Chinese, which allow for rapid searching and downloading of data and provide a solid foundation of data and support for future related research. Air pollution control in China, especially in the past decade, is undeniably a global exemplar, and this data center is the first in China to focus on research into the country’s air pollution complex.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 92044303).
Data centers operate as physical digital infrastructure for generating, storing, computing, transmitting, and utilizing massive data and information, constituting the backbone of the flourishing digital economy across the world. Given the lack of a consistent analysis for studying the locational factors of data centers and empirical deficiencies in longitudinal investigations on the spatial dynamics of heterogeneous data centers, this paper develops a comprehensive analytical framework to examine the dynamic geographies and locational factors of techno-environmentally heterogeneous data centers across Chinese cities in the period of 2006–2021. First, we develop a “supply-demand-environment trinity” analytical framework as well as an accompanying evaluation indicator system with Chinese characteristics. Second, the dynamic geographies of data centers in Chinese cities over this period are characterized as spatial polarization in economically leading urban agglomerations alongside persistent interregional gaps across eastern, central, and western regions. Data centers present dual spatial expansion trajectories featuring outward radiation from eastern core urban agglomerations to adjacent peripheries and leapfrog diffusion to strategic central and western digital infrastructural hubs. Third, it is empirically verified that data center construction in Chinese cities over this period has been jointly influenced by supply-, demand-, and environment-side locational factors, echoing the efficacy of the trinity analytical framework. Overall, our findings demonstrate the temporal variance, contextual contingency, and attribute-based differentiation of locational factors underlying techno-environmentally heterogeneous data centers in Chinese cities.
Funding: Major Program of the National Social Science Foundation of China (No. 21&ZD107).
The effect of gradient exhaust strategy and blind plate installation on the inhibition of backflow and thermal stratification in data center cabinets is systematically investigated in this study through numerical methods. The validated Re-Normalization Group (RNG) k-ε turbulence model was used to analyze airflow patterns within cabinet structures equipped with backplane air conditioning. Key findings reveal that server-generated thermal plumes induce hot air accumulation at the cabinet apex, creating a 0.8℃ temperature elevation at the top server’s inlet compared to the ideal situation (23℃). Strategic increases in backplane fan exhaust airflow rates reduce server 1’s inlet temperature from 26.1℃ (0% redundancy case) to 23.1℃ (40% redundancy case). Gradient exhaust strategies achieve equivalent server temperature performance to uniform exhaust distributions while requiring 25% less redundant airflow. This approach decreases the recirculation ratio from 1.52% (uniform exhaust at 15% redundancy) to 0.57% (gradient exhaust at equivalent redundancy). Comparative analyses demonstrate divergent thermal behaviors: in bottom-server-absent configurations, gradient exhaust reduces top server inlet temperatures by 1.6℃ vs. uniform exhaust, whereas top-server-absent configurations exhibit a 1.8℃ temperature increase under gradient conditions. The blind plate implementation achieves a 0.4℃ top server temperature reduction compared to 15%-redundancy uniform exhaust systems without requiring additional airflow redundancy. Partially installed server arrangements with blind plates maintain thermal characteristics comparable to fully populated cabinets. This study validates gradient exhaust and blind plate technologies as effective countermeasures against cabinet-scale thermal recirculation, providing actionable insights for optimizing backplane air conditioning systems in mission-critical data center environments.
Funding: Financially supported by the Basic Research Funds for the Central Government “Innovative Team of Zhejiang University” (Contract No. 2022FZZX01-09).
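As a rough illustration of how cabinet-level recirculation can be quantified, the following sketch computes a temperature-based recirculation index per server inlet (inlet rise above supply air divided by the exhaust-to-supply span, in the spirit of the supply heat index). The supply temperature matches the 23℃ ideal case above, but the per-server inlet and exhaust temperatures are hypothetical, and the paper's exact recirculation-ratio definition may differ.

    # Minimal sketch: temperature-based recirculation index per server inlet.
    # Assumes a 23 C supply and hypothetical inlet/exhaust temperatures.
    T_SUPPLY = 23.0   # backplane supply temperature (C)
    T_EXHAUST = 35.0  # hypothetical mean server exhaust temperature (C)

    def recirculation_index(t_inlet, t_supply=T_SUPPLY, t_exhaust=T_EXHAUST):
        """Fraction of inlet air attributable to recirculated hot exhaust."""
        return (t_inlet - t_supply) / (t_exhaust - t_supply)

    # Hypothetical inlet temperatures from bottom (server 1) to top server.
    inlets = [23.0, 23.1, 23.2, 23.4, 23.8]
    for i, t in enumerate(inlets, start=1):
        print(f"server {i}: inlet {t:.1f} C, "
              f"recirculation index {recirculation_index(t):.3f}")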
The growth of computing power in data centers (DCs) leads to an increase in energy consumption and noise pollution of air cooling systems. Chip-level cooling with high-efficiency coolant is one of the promising methods to address the cooling challenge for high-power devices in DCs. Hybrid nanofluid (HNF) has the advantages of high thermal conductivity and good rheological properties. This study summarizes the numerical investigations of HNFs in mini/micro heat sinks, including the numerical methods, hydrothermal characteristics, and enhanced heat transfer technologies. The innovations of this paper include: (1) the characteristics, applicable conditions, and scenarios of each theoretical method and numerical method are clarified; (2) molecular dynamics (MD) simulation can reveal the synergy effect, micro motion, and agglomeration morphology of different nanoparticles, while machine learning (ML) presents a feasible method for parameter prediction, which provides the opportunity for the intelligent regulation of the thermal performance of HNFs; (3) HNF flow boiling and the synergy of passive and active technologies may further improve the overall efficiency of liquid cooling systems in DCs. This review provides valuable insights and references for exploring the multi-phase flow and heat transport mechanisms of HNFs and promoting the practical application of HNFs in chip-level liquid cooling in DCs.
Funding: Funded by the Science and Technology Project of Tianjin (No. 24YDTPJC00680) and the National Natural Science Foundation of China (No. 52406191).
With the advent of the digital economy, there has been a rapid proliferation of small-scale Internet data centers (SIDCs). By leveraging their spatiotemporal load regulation potential through data workload balancing, aggregated SIDCs have emerged as promising demand response (DR) resources for future power distribution systems. This paper presents an innovative framework for assessing the capacity value (CV) of aggregated SIDCs participating in DR programs (SIDC-DR). Initially, we delineate the concept of CV tailored for aggregated SIDC scenarios and establish a metric for the assessment. Considering the effects of data load dynamics, equipment constraints, and user behavior, we develop a sophisticated DR model for aggregated SIDCs using a data network aggregation method. Unlike existing studies, the proposed model captures the uncertainties associated with end tenants’ decisions to opt into an SIDC-DR program by utilizing a novel uncertainty modeling approach called the Z-number formulation. This approach accounts for both the uncertainty in user participation intentions and the reliability of basic information during the DR process, enabling high-resolution profiling of the SIDC-DR potential in the CV evaluation. Simulation results from numerical studies conducted on a modified IEEE 33-node distribution system confirm the effectiveness of the proposed approach and highlight the potential benefits of SIDC-DR utilization in the efficient operation of future power systems.
Funding: Supported in part by the National Natural Science Foundation of China (Grant 52177082) and in part by the Beijing Nova Program (Grant 20220484007).
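For readers unfamiliar with the Z-number formulation, the sketch below shows one common way to collapse a Z-number Z = (A, B), a fuzzy estimate A of a tenant's DR participation paired with a fuzzy reliability B, into a crisp expected value, following the widely cited conversion of Kang et al. (scale A by the square root of the centroid of B). The triangular membership parameters are invented for illustration and are not the paper's model.

    # Minimal sketch: crisp expected value of a Z-number Z = (A, B) with
    # triangular fuzzy numbers, via the Kang et al. conversion.
    import math

    def centroid(tfn):
        """Centroid of a triangular fuzzy number (a, b, c)."""
        a, b, c = tfn
        return (a + b + c) / 3.0

    def z_expected_value(A, B):
        """Crisp expectation of Z=(A,B): centroid of sqrt(alpha)-scaled A."""
        alpha = centroid(B)                # reliability weight in [0, 1]
        scale = math.sqrt(alpha)
        return centroid(tuple(scale * x for x in A))

    # Hypothetical tenant: "about 60 kW of DR capacity", "fairly reliable" info.
    A = (40.0, 60.0, 80.0)   # fuzzy DR capacity offer, kW
    B = (0.6, 0.8, 1.0)      # fuzzy reliability of that statement
    print(f"expected usable DR capacity: {z_expected_value(A, B):.1f} kW")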
The rapid advancement of artificial intelligence (AI) has significantly increased the computational load on data centers. AI-related computational activities consume considerable electricity and result in substantial carbon emissions. To mitigate these emissions, future data centers should be strategically planned and operated to fully utilize renewable energy resources while meeting growing computational demands. This paper aims to investigate how much carbon emission reduction can be achieved by using a carbon-oriented demand response to guide the optimal planning and operation of data centers. A carbon-oriented data center planning model is proposed that considers the carbon-oriented demand response of the AI load. In the planning model, future operation simulations comprehensively coordinate the temporal-spatial flexibility of computational loads and the quality of service (QoS). An empirical study based on the proposed models is conducted on real-world data from China. The results from the empirical analysis show that newly constructed data centers are recommended to be built in Gansu Province, Ningxia Hui Autonomous Region, Sichuan Province, Inner Mongolia Autonomous Region, and Qinghai Province, accounting for 57% of the total national increase in server capacity. Moreover, 33% of the computational load from Eastern China should be transferred to the West, which could reduce the overall load carbon emissions by 26%.
Funding: Supported by the Scientific & Technical Project of the State Grid (5700--202490228A--1--1-ZN).
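To make the idea of carbon-oriented spatial load shifting concrete, the sketch below solves a toy linear program that places a divisible computational load across regions to minimize emissions, subject to regional capacity and a cap on how much load may leave its home region (a crude stand-in for QoS). All carbon intensities, capacities, and loads are invented numbers, not the paper's data.

    # Toy carbon-oriented load placement: minimize sum_i c_i * x_i subject to
    # sum_i x_i = total load, 0 <= x_i <= capacity_i, and a floor on the load
    # retained in the East (a crude QoS / transfer-limit proxy).
    from scipy.optimize import linprog

    regions = ["East", "Gansu", "Ningxia", "Sichuan", "InnerMongolia", "Qinghai"]
    carbon = [0.70, 0.45, 0.50, 0.15, 0.55, 0.20]     # tCO2/unit, hypothetical
    capacity = [100.0, 40.0, 30.0, 35.0, 45.0, 25.0]  # unit-load capacity
    east_demand = 120.0
    max_transfer = 0.33 * east_demand                 # at most 33% may leave

    bounds = [(east_demand - max_transfer, capacity[0])] + \
             [(0.0, cap) for cap in capacity[1:]]
    res = linprog(c=carbon, A_eq=[[1.0] * len(regions)], b_eq=[east_demand],
                  bounds=bounds)
    for name, x in zip(regions, res.x):
        print(f"{name:>13}: {x:6.1f} units")
    print(f"total emissions: {res.fun:.1f} tCO2")

As expected, the solver keeps only the mandatory share in the high-carbon East and routes the transferable remainder to the lowest-carbon western regions first.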
National Population Health Data Center (NPHDC) is one of China's 20 national-level science data centers, jointly designated by the Ministry of Science and Technology and the Ministry of Finance. Operated by the Chinese Academy of Medical Sciences under the oversight of the National Health Commission, NPHDC adheres to national regulations including the Scientific Data Management Measures and the National Science and Technology Infrastructure Service Platform Management Measures, and is committed to collecting, integrating, managing, and sharing biomedical and health data through an open-access platform, fostering open sharing and engaging in international cooperation.
Propelled by the rise of artificial intelligence, cloud services, and data center applications, next-generation, low-power, local-oscillator-less, digital signal processing (DSP)-free, short-reach coherent optical communication has evolved into an increasingly prominent area of research in recent years. Here, we demonstrate DSP-free coherent optical transmission by analog signal processing in a frequency synchronous optical network (FSON) architecture, which supports polarization multiplexing and higher-order modulation formats. The FSON architecture allows the numerous laser sources of optical transceivers within a data center to be quasi-synchronized by means of a tree-distributed homology architecture. In conjunction with our proposed pilot-tone-assisted Costas loop for an analog coherent receiver, we achieve a record dual-polarization 224-Gb/s 16-QAM 5-km mismatch transmission with reset-free carrier phase recovery in the optical domain. Our proposed DSP-free analog coherent detection system based on the FSON makes it a promising solution for next-generation, low-power, high-capacity coherent data center interconnects.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62405250 and 62471404), the China Postdoctoral Science Foundation (Grant No. 2024M762955), the Key Project of Westlake Institute for Optoelectronics (Grant No. 2023GD003), and the Optical Communication and Sensing Laboratory, School of Engineering, Westlake University.
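As background on carrier phase recovery, the sketch below simulates a plain decision-directed Costas-style tracking loop for QPSK in discrete time. It only illustrates the principle; the paper's pilot-tone-assisted analog loop, its loop filter, and the FSON synchronization are not modeled, and the loop gains and offsets are arbitrary illustrative values.

    # Minimal digital Costas-loop sketch for QPSK carrier phase recovery.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4000
    symbols = (rng.integers(0, 2, n) * 2 - 1
               + 1j * (rng.integers(0, 2, n) * 2 - 1)) / np.sqrt(2)
    freq_off, phase_off = 0.002, 0.7      # rad/sample and rad (hypothetical)
    rx = symbols * np.exp(1j * (freq_off * np.arange(n) + phase_off))

    kp, ki = 0.05, 0.002                  # proportional / integral loop gains
    theta, freq_est = 0.0, 0.0
    out = np.empty(n, dtype=complex)
    for i in range(n):
        y = rx[i] * np.exp(-1j * theta)   # de-rotate by current estimate
        err = np.sign(y.real) * y.imag - np.sign(y.imag) * y.real  # QPSK detector
        freq_est += ki * err              # integral branch tracks frequency
        theta += freq_est + kp * err      # phase update
        out[i] = y

    # Residual phase error after convergence (4th power removes QPSK data).
    resid = np.angle(-(out[-500:] ** 4)) / 4.0
    print(f"residual phase error (rad): {resid.std():.4f}")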
The sixth-generation (6G) mobile network is a multi-network-interconnection, multi-scenario-coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper designs corresponding reward functions for the cases where flows undergo abrupt changes and smooth changes. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer in the actor and critic networks, more information about temporal continuity is added, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with recent reinforcement learning algorithms through experiments on real network topology and traffic traces, and the experimental results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
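To illustrate the architectural point, inserting an LSTM layer ahead of the policy head so the agent can exploit temporal continuity in traffic, the sketch below shows a minimal recurrent actor in PyTorch. The layer sizes, softmax routing head, and observation layout are invented placeholders, not the paper's network.

    # Minimal sketch of an LSTM-fronted actor for recurrent MADDPG-style
    # agents; dimensions and the softmax routing head are illustrative only.
    import torch
    import torch.nn as nn

    class RecurrentActor(nn.Module):
        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)  # temporal memory
            self.head = nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),   # path probabilities
            )

        def forward(self, obs_seq, state=None):
            # obs_seq: (batch, seq_len, obs_dim) window of link-state observations
            out, state = self.lstm(obs_seq, state)
            return self.head(out[:, -1]), state  # act on the latest hidden state

    actor = RecurrentActor(obs_dim=16, n_actions=4)
    probs, state = actor(torch.randn(8, 10, 16))  # batch of 8, window of 10 steps
    print(probs.shape)                            # torch.Size([8, 4])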
The traffic within data centers exhibits bursts and unpredictable patterns. This rapid growth in network traffic has two consequences: it surpasses the inherent capacity of the network's link bandwidth and creates an imbalanced network load. Consequently, persistent overload situations eventually result in network congestion. Software Defined Network (SDN) technology is employed in data centers as a network architecture to enhance performance. This paper introduces an adaptive congestion control strategy, named DA-DCTCP, for SDN-based data centers. It incorporates Explicit Congestion Notification (ECN) and Round-Trip Time (RTT) to establish congestion awareness and an ECN marking model. To mitigate spurious congestion signals caused by abrupt flows, an appropriate ECN marking is selected based on the queue length and its growth slope, and the congestion window (CWND) is adjusted by calculating RTT. Simultaneously, the marking threshold for queue length is continuously adapted using the current queue length of the switch as a parameter to accommodate changes in data centers. The evaluation conducted through Mininet simulations demonstrates that DA-DCTCP yields advantages in terms of throughput, flow completion time (FCT), latency, and resistance against packet loss. These benefits contribute to reducing data center congestion, enhancing the stability of data transmission, and improving throughput.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
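The abstract does not spell out DA-DCTCP's exact marking rules, so the sketch below is only a schematic of the stated idea: mark ECN based on both the instantaneous queue length and its growth slope, while adapting the marking threshold from recent queue occupancy. All constants are invented.

    # Schematic of slope-aware, self-adapting ECN marking (constants invented;
    # not the DA-DCTCP specification).
    class AdaptiveEcnMarker:
        def __init__(self, k_init=40.0, slope_thresh=5.0, ewma=0.1):
            self.k = k_init              # marking threshold (packets)
            self.slope_thresh = slope_thresh
            self.ewma = ewma             # threshold adaptation gain
            self.prev_qlen = 0.0

        def on_enqueue(self, qlen: float) -> bool:
            slope = qlen - self.prev_qlen
            self.prev_qlen = qlen
            # Mark when the queue is long, or shorter but growing steeply
            # (early warning for abrupt bursts).
            mark = qlen > self.k or (slope > self.slope_thresh
                                     and qlen > 0.5 * self.k)
            # Drift the threshold toward recent queue occupancy.
            self.k = (1 - self.ewma) * self.k + self.ewma * max(qlen, 1.0)
            return mark

    marker = AdaptiveEcnMarker()
    for q in [10, 18, 30, 47, 60, 35, 20]:
        print(f"qlen={q:>2}  mark={marker.on_enqueue(q)}")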
With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, the traditional network architecture can no longer meet demand. The development of software defined networking (SDN) has brought new opportunities and challenges to future networks. SDN's separation of the data and control planes improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve network resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks. It then discusses SDN-based load balancing mechanisms for data centers from different perspectives. Finally, it summarizes research on SDN-based load balancing mechanisms and looks ahead to its development trends.
The interest in selecting an appropriate cloud data center is increasing rapidly due to the popularity and continuous growth of the cloud computing sector. Cloud data center selection challenges are compounded by ever-increasing user requests and the number of data centers required to execute these requests. The cloud service broker policy defines the selection of cloud data centers, an NP-hard problem that requires an efficient and high-quality solution. The differential evolution algorithm is a metaheuristic algorithm characterized by its speed and robustness, and it is well suited for selecting an appropriate cloud data center. This paper presents a modified differential evolution algorithm-based cloud service broker policy for selecting the most appropriate data center in the cloud computing environment. The differential evolution algorithm is modified using a proposed new mutation technique, ensuring enhanced performance and an appropriate selection of data centers. The proposed policy's superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator, and the results are compared with state-of-the-art cloud service broker policies.
Funding: Supported by Universiti Sains Malaysia under an external grant (Grant No. 304/PNAV/650958/U154).
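For context, the sketch below implements the classic DE/rand/1/bin differential evolution loop minimizing a toy broker cost (a weighted mix of hypothetical per-data-center latency and price). The paper's new mutation operator is not given in the abstract, so standard rand/1 mutation stands in for it.

    # Classic DE/rand/1/bin baseline; cost is a toy latency+price mix over
    # data-center traffic shares (hypothetical numbers).
    import numpy as np

    latency = np.array([12.0, 35.0, 20.0, 28.0])   # ms per DC, hypothetical
    price   = np.array([0.9, 0.3, 0.6, 0.4])       # $/request, hypothetical

    def cost(w):
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)  # normalize to traffic shares
        return w @ latency + 20.0 * (w @ price)

    def de_rand1_bin(fobj, dim, pop_size=24, mut=0.8, cr=0.9, iters=200, seed=1):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(0.0, 1.0, (pop_size, dim))
        fit = np.array([fobj(p) for p in pop])
        for _ in range(iters):
            for i in range(pop_size):
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, 3, replace=False)]
                mutant = a + mut * (b - c)                 # rand/1 mutation
                mask = rng.random(dim) < cr
                mask[rng.integers(dim)] = True             # binomial crossover
                trial = np.where(mask, mutant, pop[i])
                f = fobj(trial)
                if f <= fit[i]:                            # greedy selection
                    pop[i], fit[i] = trial, f
        return pop[fit.argmin()], fit.min()

    best, f = de_rand1_bin(cost, dim=4)
    shares = np.abs(best) / np.abs(best).sum()
    print("best traffic shares:", np.round(shares, 3), "cost:", round(f, 2))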
In recent years, dual-homed topologies have appeared in data centers in order to offer higher aggregate bandwidth by using multiple paths simultaneously. Multipath TCP (MPTCP) has been proposed as a replacement for TCP in those topologies, as it can efficiently offer improved throughput and better fairness. However, we have found that MPTCP suffers from incast collapse, where the receiver experiences a drastic goodput drop when it simultaneously requests data from multiple servers. In this paper, we investigate why the goodput collapses even though MPTCP is able to actively relieve hot spots. To address the problem, we propose an equally-weighted congestion control algorithm for MPTCP, namely EW-MPTCP, which requires no centralized control, additional infrastructure, or hardware upgrade. In our scheme, in addition to the coupled congestion control performed on each subflow of an MPTCP connection, we allow each subflow to perform an additional congestion control operation by weighting the congestion window in inverse proportion to the number of servers. The goal is to mitigate incast collapse by allowing multiple MPTCP subflows to compete fairly with a single TCP flow at the shared bottleneck. The simulation results show that our solution mitigates the incast problem and noticeably improves goodput in data centers.
Funding: Supported in part by the HUT Distributed and Mobile Cloud Systems research project and by Tekes within the ITEA2 project 10014 EASI-CLOUDS.
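The core of EW-MPTCP, as described above, is a small change to the per-ACK window increase: each subflow applies the coupled (LIA-style) increase and additionally scales it by 1/n, where n is the number of servers in the incast fan-in, so the whole MPTCP connection competes roughly like one TCP flow at the shared bottleneck. The sketch below shows that update rule in isolation, with the LIA coupling simplified.

    # Sketch of the EW-MPTCP per-ACK window increase on one subflow.
    # LIA coupling is simplified; the 1/n_servers weighting is the EW idea.
    def lia_alpha(cwnds, rtts):
        """Simplified LIA aggressiveness factor for the coupled increase."""
        best = max(c / (r * r) for c, r in zip(cwnds, rtts))
        total = sum(c / r for c, r in zip(cwnds, rtts))
        return sum(cwnds) * best / (total * total)

    def ew_mptcp_increase(cwnd_r, cwnds, rtts, n_servers):
        """Window increase of subflow r per ACK (in segments)."""
        coupled = min(lia_alpha(cwnds, rtts) / sum(cwnds), 1.0 / cwnd_r)
        return coupled / n_servers        # equal weighting across servers

    cwnds, rtts = [10.0, 8.0], [0.001, 0.0012]
    print(ew_mptcp_increase(10.0, cwnds, rtts, n_servers=4))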
With the rapid development of technologies such as big data and cloud computing, exponential growth in data communication and data computing has led to large energy consumption in data centers. Globally, data centers are on track to become one of the world's largest consumers of electricity, with their share rising from 3% in 2017 to a projected 4.5% in 2025. Due to its unique climate and energy-saving advantages, the high-latitude Pan-Arctic region has gradually become a hotspot for data center site selection in recent years. In order to predict and analyze the future energy consumption and carbon emissions of global data centers, this paper presents a new prediction method based on global data center traffic and power usage effectiveness (PUE). Firstly, global data center traffic growth is predicted based on Cisco's research. Secondly, the dynamic global average PUE and the high-latitude PUE based on the Romonet simulation model are obtained, and then global data center energy consumption under two scenarios, decentralized and centralized, is analyzed quantitatively via polynomial fitting. The simulation results show that, in 2030, global data center energy consumption and carbon emissions are reduced by about 301 billion kWh and 720 million tons of CO2, respectively, in the centralized scenario compared with the decentralized scenario, which confirms that establishing data centers in the Pan-Arctic region can effectively relieve future climate-change and energy problems. This study provides support for global energy consumption prediction and guidance for the layout of future global data centers from the perspective of energy consumption. Moreover, it supports the feasibility of integrating energy and information networks under the Global Energy Interconnection conception.
Funding: Supported by the National Natural Science Foundation of China (61472042) and the Corporation Science and Technology Program of Global Energy Interconnection Group Ltd. (GEIGC-D-[2018]024).
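The prediction pipeline described above reduces to: fit a polynomial to historical traffic, extrapolate, convert traffic to IT energy, and scale by a scenario PUE. The sketch below reproduces that arithmetic with invented traffic figures, an invented energy intensity, and illustrative PUE values; it is not the paper's calibrated model.

    # Sketch of the traffic -> energy -> PUE-scenario pipeline with invented
    # numbers (traffic in zettabytes, kWh-per-GB intensity, PUE values).
    import numpy as np

    years   = np.array([2016, 2017, 2018, 2019, 2020, 2021])
    traffic = np.array([6.8, 9.1, 11.6, 14.1, 16.8, 20.6])  # ZB/yr, hypothetical

    coeffs = np.polyfit(years, traffic, deg=2)               # polynomial fit
    traffic_2030 = np.polyval(coeffs, 2030)

    kwh_per_gb = 0.006                                       # hypothetical intensity
    it_energy_twh = traffic_2030 * 1e12 * kwh_per_gb / 1e9   # ZB -> GB -> TWh

    for name, pue in [("decentralized (avg PUE 1.6)", 1.6),
                      ("centralized high-latitude (PUE 1.2)", 1.2)]:
        print(f"{name}: {it_energy_twh * pue:,.0f} TWh in 2030")

The gap between the two printed figures is the kind of scenario difference the paper quantifies, here driven entirely by the assumed PUE spread.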
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced in which the data nodes serve as the leaf nodes, and the final winner is selected with the aim of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
Funding: Supported by the National Natural Science Foundation of China (61202004, 61272084), the National Key Basic Research Program of China (973 Program) (2011CB302903), the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003), the China Postdoctoral Science Foundation (2011M500095, 2012T50514), the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426), the Jiangsu Postdoctoral Science Foundation (1102103C), the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
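The winner tree is a pairwise tournament: data nodes sit at the leaves, adjacent nodes are compared by an energy-oriented key, and the winner propagates upward until one node remains to receive the task. The sketch below implements that selection with a hypothetical per-node key; L3SA's actual task comparison coefficient is not specified in the abstract.

    # Winner-tree (tournament) selection of the node to host a task; the
    # comparison key below is hypothetical, standing in for L3SA's coefficient.
    def select_winner(nodes, better):
        """Run a pairwise tournament over node indices; return the champion."""
        layer = list(range(len(nodes)))
        while len(layer) > 1:
            nxt = []
            for i in range(0, len(layer) - 1, 2):
                a, b = layer[i], layer[i + 1]
                nxt.append(a if better(nodes[a], nodes[b]) else b)
            if len(layer) % 2:             # odd node out gets a bye
                nxt.append(layer[-1])
            layer = nxt
        return layer[0]

    def lower_energy_cost(n1, n2):
        # Prefer the node with lower marginal power per unit of spare capacity.
        key = lambda n: n["power_w"] / max(1.0 - n["util"], 1e-6)
        return key(n1) <= key(n2)

    nodes = [{"util": 0.82, "power_w": 310}, {"util": 0.35, "power_w": 240},
             {"util": 0.55, "power_w": 260}, {"util": 0.10, "power_w": 210}]
    print("winner node index:", select_winner(nodes, lower_energy_cost))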
With the emergence of diverse applications in data centers, the demands on quality of service (QoS) in data centers have also become diverse, such as high throughput for elephant flows and low latency for deadline-sensitive flows. However, traditional TCPs are ill-suited to such situations and often result in inefficient data transfers (e.g., missed flow deadlines, throughput collapse), which further degrades the user-perceived QoS in data centers. To reduce the flow completion time of mice and deadline-sensitive flows while promoting the throughput of elephant flows, this paper proposes an efficient and deadline-aware priority-driven congestion control (PCC) protocol, which grants mice and deadline-sensitive flows the highest priority. Specifically, PCC computes the priority of different flows according to the size of the transmitted data, the remaining data volume, and the flows' deadlines. PCC then adjusts the congestion window according to the flow priority and the degree of network congestion. Furthermore, switches in data centers control the input/output of packets based on flow priority and queue length. Unlike existing TCPs, to speed up the data transfers of mice and deadline-sensitive flows, PCC provides an effective method to compute and encode the flow priority explicitly. According to the flow priority, switches can manage packets efficiently and ensure the data transfers of high-priority flows through weighted priority scheduling with minor modification. The experimental results prove that PCC can improve the data transfer performance of mice and deadline-sensitive flows while guaranteeing the throughput of elephant flows.
Funding: Supported in part by the National Natural Science Foundation of China (61601252, 61801254), Public Technology Projects of Zhejiang Province (LG-G18F020007), the Zhejiang Provincial Natural Science Foundation of China (LY20F020008, LY18F020011, LY20F010004), and the K.C. Wong Magna Fund at Ningbo University.
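The abstract states that PCC derives each flow's priority from the bytes already sent, the remaining volume, and the deadline, and lets switches schedule by that priority. The sketch below shows one plausible encoding of such a priority and the resulting dequeue order; the functional form and constants are invented, not PCC's specification.

    # Illustrative flow-priority computation and priority-ordered dequeue in
    # the spirit of PCC; the scoring formula and constants are invented.
    import heapq, time

    def flow_priority(sent_bytes, remaining_bytes, deadline_s, now_s):
        """Smaller score = higher priority (mice / urgent deadlines first)."""
        urgency = max(deadline_s - now_s, 1e-3) if deadline_s else float("inf")
        size_term = (sent_bytes + remaining_bytes) / 1e6   # favor small flows
        return min(urgency, 10.0) + 0.5 * size_term

    now = time.time()
    flows = [
        ("mouse",        2e4, 5e4, None),
        ("deadline-web", 1e5, 4e5, now + 0.05),
        ("elephant",     5e8, 2e9, None),
    ]
    pq = [(flow_priority(s, r, d, now), name) for name, s, r, d in flows]
    heapq.heapify(pq)
    while pq:
        score, name = heapq.heappop(pq)
        print(f"dequeue {name:<12} (score {score:.2f})")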
Global data traffic is growing rapidly, and the demand for optoelectronic transceivers applied in data centers (DCs) is increasing correspondingly. In this review, we first briefly introduce the development of optoelectronic transceivers in DCs, as well as the advantages of silicon photonic chips fabricated by the complementary metal oxide semiconductor process. We also summarize research on the main components in silicon photonic transceivers. In particular, quantum dot lasers have shown great potential as light sources for silicon photonic integration, whether through bonding or monolithic integration, thanks to their unique advantages over conventional quantum-well counterparts. Some of the solutions for high-speed optical interconnection in DCs are then discussed. Among them, wavelength division multiplexing and four-level pulse-amplitude modulation have been widely studied and applied. At present, the application of coherent optical communication technology has moved from the backbone network to the metro network, and then to DCs.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2016YFB0402302) and the National Natural Science Foundation of China (Grant No. 91433206).
New and emerging use cases, such as the interconnection of geographically distributed data centers (DCs), are drawing attention to the requirement for dynamic end-to-end service provisioning spanning multiple and heterogeneous optical network domains. This heterogeneity is due not only to the diverse data transmission and switching technologies, but also to the different options of control plane techniques. In light of this, the problem of heterogeneous control plane interworking needs to be solved, and in particular, the solution must address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints. In this article, some of the recent activities regarding Software-Defined Networking (SDN) orchestration are reviewed to address this multi-domain control plane interworking problem. Specifically, three different models, including the single SDN controller model, multiple SDN controllers in mesh, and multiple SDN controllers in a hierarchical setting, are presented for the DC interconnection network with multiple SDN/OpenFlow domains or multiple OpenFlow/Generalized Multi-Protocol Label Switching (GMPLS) heterogeneous domains. In addition, two concrete implementations of the orchestration architectures are detailed, showing the overall feasibility and procedures of SDN orchestration for end-to-end service provisioning in multi-domain data center optical networks.
Data centers are recognized as one of the most important aspects of the fourth industrial revolution, since conventional data centers are inefficient and depend on high energy consumption, of which cooling is responsible for 40%. Therefore, this research proposes the immersion cooling method to solve the high energy consumption of data centers by cooling their components using two types of dielectric fluids. Four experimental stages are used, covering fluid type, cooling effectiveness, optimization, and durability. Furthermore, benchmark software is used to measure the CPU's maximum workload, with temperature data recorded for 24 h. The results of this study show that immersion cooling yields temperatures 13℃ lower than the conventional cooling method, which means it saves more energy consumption in the data center. The most optimal operating point for decreasing the temperature is a flow rate of 1.5 lpm and a fan speed of 800 rpm. Furthermore, the cooling performance of the dielectric fluids shows that mineral oil (MO) performs better than virgin coconut oil (VCO). In the durability experiment, there is no component damage after five months immersed in the fluid.
Funding: Financially supported by the Ministry of Research and Technology of Indonesia (BRIN) under the project “Penggunaan Immersion Cooling untuk Meningkatkan Efisiensi Energi Data Center”.
An 8×10 GHz receiver optical sub-assembly (ROSA) consisting of an 8-channel arrayed waveguide grating (AWG) and an 8-channel PIN photodetector (PD) array is designed and fabricated based on silica hybrid integration technology. Multimode output waveguides in the silica AWG with 2% refractive index difference are used to obtain flat-top spectra. The output waveguide facet is polished to a 45° bevel to change the light propagation direction into the mesa-type PIN PD, which simplifies the packaging process. The experimental results show that the single-channel 1 dB bandwidth of the AWG ranges from 2.12 nm to 3.06 nm, the ROSA responsivity ranges from 0.097 A/W to 0.158 A/W, and the 3 dB bandwidth is up to 11 GHz. It is promising for application in eight-lane WDM transmission systems for data center interconnection.
Funding: Supported by the National High Technology Research and Development Program of China (Grant No. 2015AA016902), the National Natural Science Foundation of China (Grant Nos. 61435013 and 61405188), and the K.C. Wong Education Foundation.