Energy generation and consumption are the main aspects of social life due to the fact that modern people’s necessity for energy is a crucial ingredient for existence. Therefore, energy efficiency is regarded as the b...Energy generation and consumption are the main aspects of social life due to the fact that modern people’s necessity for energy is a crucial ingredient for existence. Therefore, energy efficiency is regarded as the best economical approach to provide safer and affordable energy for both utilities and consumers, through the enhancement of energy security and reduction of energy emissions. One of the problems of cloud computing service providers is the high rise in the cost of energy, efficiency together with carbon emission with regards to the running of their internet data centres (IDCs). In order to mitigate these issues, smart micro-grid was found to be suitable in increasing the energy efficiency, sustainability together with the reliability of electrical services for the IDCs. Therefore, this paper presents idea on how smart micro-grids can bring down the disturbing cost of energy, carbon emission by the IDCs with some level of energy efficiency all in an effort to attain green cloud computing services from the service providers. In specific term, we aim at achieving green information and communication technology (ICT) in the field of cloud computing in relations to energy efficiency, cost-effectiveness and carbon emission reduction from cloud data center’s perspective.展开更多
The increase in computing capacity caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a big concern and a key target in modern data centers. In this study, the sca...The increase in computing capacity caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a big concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under multiple different workloads. The environment of this study is a 3-Tier data center. We conducted multiple experiments to find the impact of using DVFS on energy reduction under two scheduling techniques, namely: Round Robin and Green. We observed that the amount of energy reduction varies according to data center load. When the data center load increases, the energy reduction decreases. Experiments using Green scheduler showed around 83% decrease in power consumption when DVFS is enabled and DC is lightly loaded. In case the DC is fully loaded, in which case the servers’ CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes to less than 10%. Experiments using Round Robin scheduler showed less energy saving by DVFS, specifically, around 25% in light DC load and less than 5% in heavy DC load. In order to find the effect of task weight on energy consumption, a set of experiments were conducted through applying thin and fat tasks. A thin task has much less instructions compared to fat tasks. We observed, through the simulation, that the difference in power reduction between both types of tasks when using DVFS is less than 1%.展开更多
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data cente...How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. The winner tree is introduced to make the data nodes as the leaf nodes of the tree and the final winner on the purpose of reducing energy consumption is selected. The complexity of large-scale cloud data centers is fully consider, and the task comparson coefficient is defined to make task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve the node utilization, and reduce the overall power consumption of the cloud data center.展开更多
The interest in selecting an appropriate cloud data center is exponentially increasing due to the popularity and continuous growth of the cloud computing sector.Cloud data center selection challenges are compounded by...The interest in selecting an appropriate cloud data center is exponentially increasing due to the popularity and continuous growth of the cloud computing sector.Cloud data center selection challenges are compounded by ever-increasing users’requests and the number of data centers required to execute these requests.Cloud service broker policy defines cloud data center’s selection,which is a case of an NP-hard problem that needs a precise solution for an efficient and superior solution.Differential evolution algorithm is a metaheuristic algorithm characterized by its speed and robustness,and it is well suited for selecting an appropriate cloud data center.This paper presents a modified differential evolution algorithm-based cloud service broker policy for the most appropriate data center selection in the cloud computing environment.The differential evolution algorithm is modified using the proposed new mutation technique ensuring enhanced performance and providing an appropriate selection of data centers.The proposed policy’s superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator.The results are compared with the state-of-arts cloud service broker policies.展开更多
Based on the Saudi Green initiative,which aims to improve the Kingdom’s environmental status and reduce the carbon emission of more than 278 million tons by 2030 along with a promising plan to achieve netzero carbon ...Based on the Saudi Green initiative,which aims to improve the Kingdom’s environmental status and reduce the carbon emission of more than 278 million tons by 2030 along with a promising plan to achieve netzero carbon by 2060,NEOM city has been proposed to be the“Saudi hub”for green energy,since NEOM is estimated to generate up to 120 Gigawatts(GW)of renewable energy by 2030.Nevertheless,the Information and Communication Technology(ICT)sector is considered a key contributor to global energy consumption and carbon emissions.The data centers are estimated to consume about 13%of the overall global electricity demand by 2030.Thus,reducing the total carbon emissions of the ICT sector plays a vital factor in achieving the Saudi plan to minimize global carbon emissions.Therefore,this paper aims to propose an eco-friendly approach using a Mixed-Integer Linear Programming(MILP)model to reduce the carbon emissions associated with ICT infrastructure in Saudi Arabia.This approach considers the Saudi National Fiber Network(SNFN)as the backbone of Saudi Internet infrastructure.First,we compare two different scenarios of data center locations.The first scenario considers a traditional cloud data center located in Jeddah and Riyadh,whereas the second scenario considers NEOM as a potential cloud data center new location to take advantage of its green energy infrastructure.Then,we calculate the energy consumption and carbon emissions of cloud data centers and their associated energy costs.After that,we optimize the energy efficiency of different cloud data centers’locations(in the SNFN)to reduce the associated carbon emissions and energy costs.Simulation results show that the proposed approach can save up to 94%of the carbon emissions and 62%of the energy cost compared to the current cloud physical topology.These savings are achieved due to the shifting of cloud data centers from cities that have conventional energy sources to a city that has rich in renewable energy sources.Finally,we design a heuristic algorithm to verify the proposed approach,and it gives equivalent results to the MILP model.展开更多
The important issues of network TCP congestion control are how to compute the link price according to the link status and regulate the data sending rate based on link congestion pricing feedback information.However,it...The important issues of network TCP congestion control are how to compute the link price according to the link status and regulate the data sending rate based on link congestion pricing feedback information.However,it is difficult to predict the congestion state of the link-end accurately at the source.In this paper,we presented an improved NUMFabric algorithm for calculating the overall congestion price.In the proposed scheme,the whole network structure had been obtained by the central control server in the Software Defined Network,and a kind of dual-hierarchy algorithm for calculating overall network congestion price had been demonstrated.In this scheme,the first hierarchy algorithm was set up in a central control server like Opendaylight and the guiding parameter B is obtained based on the intelligent data of global link state information.Based on the historical data,the congestion state of the network and the guiding parameter B is accurately predicted by the machine learning algorithm.The second hierarchy algorithm was installed in the Openflow link and the link price was calculated based on guiding parameter B given by the first algorithm.We evaluate this evolved NUMFabric algorithm in NS3,which demonstrated that the proposed NUMFabric algorithm could efficiently increase the link bandwidth utilization of cloud computing IoT datacenters.展开更多
In Cloud computing, data and service requests are responded by remote processes calls on huge data server clusters that are not totally trusted. The new computing pattern may cause many potential security threats. Thi...In Cloud computing, data and service requests are responded by remote processes calls on huge data server clusters that are not totally trusted. The new computing pattern may cause many potential security threats. This paper explores how to ensure the integrity and correctness of data storage in cloud computing with user's key pair. In this paper, we aim mainly at constructing of a quick data chunk verifying scheme to maintain data in data center by implementing a balance strategy of cloud computing costs, removing the heavy computing load of clients, and applying an automatic data integrity maintenance method. In our scheme, third party auditor (TPA) is kept in the scheme, for the sake of the client, to periodically check the integrity of data blocks stored in data center. Our scheme supports quick public data integrity verification and chunk redundancy strategy. Compared with the existing scheme, it takes the advantage of ocean data support and high performance.展开更多
As the amount of data continues to grow rapidly,the variety of data produced by applications is becoming more affluent than ever.Cloud computing is the best technology evolving today to provide multi-services for the ...As the amount of data continues to grow rapidly,the variety of data produced by applications is becoming more affluent than ever.Cloud computing is the best technology evolving today to provide multi-services for the mass and variety of data.The cloud computing features are capable of processing,managing,and storing all sorts of data.Although data is stored in many high-end nodes,either in the same data centers or across many data centers in cloud,performance issues are still inevitable.The cloud replication strategy is one of best solutions to address risk of performance degradation in the cloud environment.The real challenge here is developing the right data replication strategy with minimal data movement that guarantees efficient network usage,low fault tolerance,and minimal replication frequency.The key problem discussed in this research is inefficient network usage discovered during selecting a suitable data center to store replica copies induced by inadequate data center selection criteria.Hence,to mitigate the issue,we proposed Replication Strategy with a comprehensive Data Center Selection Method(RS-DCSM),which can determine the appropriate data center to place replicas by considering three key factors:Popularity,space availability,and centrality.The proposed RS-DCSM was simulated using CloudSim and the results proved that data movement between data centers is significantly reduced by 14%reduction in overall replication frequency and 20%decrement in network usage,which outperformed the current replication strategy,known as Dynamic Popularity aware Replication Strategy(DPRS)algorithm.展开更多
This paper investigates autonomic cloud data center networks, which is the solution with the increasingly complex computing environment, in terms of the management and cost issues to meet users’ growing demand. The v...This paper investigates autonomic cloud data center networks, which is the solution with the increasingly complex computing environment, in terms of the management and cost issues to meet users’ growing demand. The virtualized cloud networking is to provide a plethora of rich online applications, including self-configuration, self-healing, self-optimization and self-protection. In addition, we draw on the intelligent subject and multi-agent system, concerning system model, strategy, autonomic cloud computing, involving independent computing system development and implementation. Then, combining the architecture with the autonomous unit, we propose the MCDN (Model of Autonomic Cloud Data Center Networks). This model can define intelligent state, elaborate the composition structure, and complete life cycle. Finally, our proposed public infrastructure can be provided with the autonomous unit in the supported interaction model.展开更多
A smart grid is the evolved form of the power grid with the integration of sensing,communication,computing,monitoring,and control technologies.These technologies make the power grid reliable,efficient,and economical.H...A smart grid is the evolved form of the power grid with the integration of sensing,communication,computing,monitoring,and control technologies.These technologies make the power grid reliable,efficient,and economical.However,the smartness boosts the volume of data in the smart grid.To obligate full benefits,big data has attractive techniques to process and analyze smart grid data.This paper presents and simulates a framework to make sure the use of big data computing technique in the smart grid.The offered framework comprises of the following four layers:(i)Data source layer,(ii)Data transmission layer,(iii)Data storage and computing layer,and(iv)Data analysis layer.As a proof of concept,the framework is simulated by taking the dataset of three cities of the Pakistan region and by considering two cloud-based data centers.The results are analyzed by taking into account the following parameters:(i)Heavy load data center,(ii)The impact of peak hour,(iii)High network delay,and(iv)The low network delay.The presented framework may help the power grid to achieve reliability,sustainability,and cost-efficiency for both the users and service providers.展开更多
In order to balancing based on data achieve dynamic load flow level, in this paper, we apply SDN technology to the cloud data center, and propose a dynamic load balancing method of cloud center based on SDN. The appro...In order to balancing based on data achieve dynamic load flow level, in this paper, we apply SDN technology to the cloud data center, and propose a dynamic load balancing method of cloud center based on SDN. The approach of using the SDN technology in the current task scheduling flexibility, accomplish real-time monitoring of the service node flow and load condition by the OpenFlow protocol. When the load of system is imbalanced, the controller can allocate globally network resources. What's more, by using dynamic correction, the load of the system is not obvious tilt in the long run. The results of simulation show that this approach can realize and ensure that the load will not tilt over a long period of time, and improve the system throughput.展开更多
Cloud computing is becoming an important solution for providing scalable computing resources via Internet. Because there are tens of thousands of nodes in data center, the probability of server failures is nontrivial....Cloud computing is becoming an important solution for providing scalable computing resources via Internet. Because there are tens of thousands of nodes in data center, the probability of server failures is nontrivial. Therefore, it is a critical challenge to guarantee the service reliability. Fault-tolerance strategies, such as checkpoint, are commonly employed. Because of the failure of the edge switches, the checkpoint image may become inaccessible. Therefore, current checkpoint-based fault tolerance method cannot achieve the best effect. In this paper, we propose an optimal checkpoint method with edge switch failure-aware. The edge switch failure-aware checkpoint method includes two algorithms. The first algorithm employs the data center topology and communication characteristic for checkpoint image storage server selection. The second algorithm employs the checkpoint image storage characteristic as well as the data center topology to select the recovery server. Simulation experiments are performed to demonstrate the effectiveness of the proposed method.展开更多
The fast technology development of 5G mobile broadband (5G), Internet of Things (IoT), Big Data Analytics (Big Data), Cloud Computing (Cloud) and Software Defined Networks (SDN) has made those technologies one after a...The fast technology development of 5G mobile broadband (5G), Internet of Things (IoT), Big Data Analytics (Big Data), Cloud Computing (Cloud) and Software Defined Networks (SDN) has made those technologies one after another and created strong interdependence among one another. For example, IoT applications that generate small data with large volume and fast velocity will need 5G with characteristics of high data rate and low latency to transmit such data faster and cheaper. On the other hand, those data also need Cloud to process and to store and furthermore, SDN to provide scalable network infrastructure to transport this large volume of data in an optimal way. This article explores the technical relationships among the development of IoT, Big Data, Cloud, and SDN in the coming 5G era and illustrates several ongoing programs and applications at National Chiao Tung University that are based on the converging of those technologies.展开更多
Cloud computing is a new vision about the needs of information technology (IT). It provides a comprehensive concept for building a homogeneous environment through services offered in the cloud Software-as-a-Service ...Cloud computing is a new vision about the needs of information technology (IT). It provides a comprehensive concept for building a homogeneous environment through services offered in the cloud Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). Cloud computing is location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid. Cloud computing is computing paradigm that is driven by economies of scale, in which a set of dynamically-scalable resources such as servers, storages, platforms, and services are delivered on demand to the customers over the interuet. "Cloud computing is a continuation of the direction the industry has been going for the last several years in terms of using shared and elastically scalable computing resources," says Rex Wang1, VP of Product Marketing at Oracle, who spoke at the Gartner Data Center Conference, in January 2011. Cloud computing refers to dynamic provision of virtual distributed computational resources on demand via a computer network. Cloud computing is a new high technology industry that possesses a number of advantages over existing business practices: a reduction of expenses, technical staff, and efforts of the end users.展开更多
The convergence of Internet of things(IoT)and 5G holds immense potential for transforming industries by enabling real-time,massive-scale connectivity and automation.However,the growing number of devices connected to t...The convergence of Internet of things(IoT)and 5G holds immense potential for transforming industries by enabling real-time,massive-scale connectivity and automation.However,the growing number of devices connected to the IoT systems demands a communication network capable of handling vast amounts of data with minimal delay.These generated enormous complex,high-dimensional,high-volume,and high-speed data also brings challenges on its storage,transmission,processing,and energy cost,due to the limited computing capabilities,battery capacity,memory,and energy utilization of current IoT networks.In this paper,a seamless architecture by combining mobile and cloud computing is proposed.It can agilely bargain with 5G-IoT devices,sensor nodes,and mobile computing in a distributed manner,enabling minimized energy cost,high interoperability,and high scalability as well as overcoming the memory constraints.An artificial intelligence(AI)-powered green and energy-efficient architecture is then proposed for 5G-IoT systems and sustainable smart cities.The experimental results reveal that the proposed approach dramatically reduces the transmitted data volume and power consumption and yields superior results regarding interoperability,compression ratio,and energy saving.This is especially critical in enabling the deployment of 5G and even 6G wireless systems for smart cities.展开更多
随着云计算技术的快速发展,传统通信互联网数据中心(Internet Data Center,IDC)机房供电系统已难以满足其高可靠性、高效率、灵活性的需求。文章针对云计算数据中心的特点,提出了一种创新的通信IDC机房供电电源系统设计方案。该方案包...随着云计算技术的快速发展,传统通信互联网数据中心(Internet Data Center,IDC)机房供电系统已难以满足其高可靠性、高效率、灵活性的需求。文章针对云计算数据中心的特点,提出了一种创新的通信IDC机房供电电源系统设计方案。该方案包括主供电、备用电源和不间断电源(Uninterruptible Power Supply,UPS)3个模块,并在相应模块中集成了智能配电管理系统、云平台能量管理系统、智能UPS能效管理系统。该研究为云计算数据中心的稳定运行和可持续发展提供了有力支撑。展开更多
为了提升数据中心电源管理系统的响应速度与能效管理水平,以某互联网数据中心(Internet Data Center,IDC)机房改造项目为例,设计并实现了一套融合5G的智能IDC电源管理系统。通过构建感知层、网络层、平台层和应用层的分层架构,结合边缘...为了提升数据中心电源管理系统的响应速度与能效管理水平,以某互联网数据中心(Internet Data Center,IDC)机房改造项目为例,设计并实现了一套融合5G的智能IDC电源管理系统。通过构建感知层、网络层、平台层和应用层的分层架构,结合边缘计算与微服务技术,系统实现了能耗数据采集时间粒度、控制响应时延、平均故障发现时间、应急负载切换时间的大幅度缩短,同时在故障预警、应急调度等方面展现出显著优势。研究结果表明,该系统有效提升了电源管理的智能化、精细化与实时化水平,为绿色数据中心建设提供了可行路径。展开更多
全生命周期管理是现代工程技术领域中的重要管控理念。互联网数据中心(Internet Data Center,IDC)通信机房为数据与网络的核心枢纽,在进行机电安装时涉及供配电、环境控制、安全与布线等多系统的协同。基于此,分析全生命周期视角下IDC...全生命周期管理是现代工程技术领域中的重要管控理念。互联网数据中心(Internet Data Center,IDC)通信机房为数据与网络的核心枢纽,在进行机电安装时涉及供配电、环境控制、安全与布线等多系统的协同。基于此,分析全生命周期视角下IDC通信机房机电安装面临的复杂问题,研究电力系统安装与监控、环境控制系统安装、安全与网络系统安装、综合布线与设备布局的具体方法,并结合实际案例开展实证分析,旨在形成覆盖全流程的机电安装技术框架,为机房运行节能化和稳定化提供参考。展开更多
文摘Energy generation and consumption are the main aspects of social life due to the fact that modern people’s necessity for energy is a crucial ingredient for existence. Therefore, energy efficiency is regarded as the best economical approach to provide safer and affordable energy for both utilities and consumers, through the enhancement of energy security and reduction of energy emissions. One of the problems of cloud computing service providers is the high rise in the cost of energy, efficiency together with carbon emission with regards to the running of their internet data centres (IDCs). In order to mitigate these issues, smart micro-grid was found to be suitable in increasing the energy efficiency, sustainability together with the reliability of electrical services for the IDCs. Therefore, this paper presents idea on how smart micro-grids can bring down the disturbing cost of energy, carbon emission by the IDCs with some level of energy efficiency all in an effort to attain green cloud computing services from the service providers. In specific term, we aim at achieving green information and communication technology (ICT) in the field of cloud computing in relations to energy efficiency, cost-effectiveness and carbon emission reduction from cloud data center’s perspective.
文摘The increase in computing capacity caused a rapid and sudden increase in the Operational Expenses (OPEX) of data centers. OPEX reduction is a big concern and a key target in modern data centers. In this study, the scalability of the Dynamic Voltage and Frequency Scaling (DVFS) power management technique is studied under multiple different workloads. The environment of this study is a 3-Tier data center. We conducted multiple experiments to find the impact of using DVFS on energy reduction under two scheduling techniques, namely: Round Robin and Green. We observed that the amount of energy reduction varies according to data center load. When the data center load increases, the energy reduction decreases. Experiments using Green scheduler showed around 83% decrease in power consumption when DVFS is enabled and DC is lightly loaded. In case the DC is fully loaded, in which case the servers’ CPUs are constantly busy with no idle time, the effect of DVFS decreases and stabilizes to less than 10%. Experiments using Round Robin scheduler showed less energy saving by DVFS, specifically, around 25% in light DC load and less than 5% in heavy DC load. In order to find the effect of task weight on energy consumption, a set of experiments were conducted through applying thin and fat tasks. A thin task has much less instructions compared to fat tasks. We observed, through the simulation, that the difference in power reduction between both types of tasks when using DVFS is less than 1%.
基金supported by the National Natural Science Foundation of China(6120200461272084)+9 种基金the National Key Basic Research Program of China(973 Program)(2011CB302903)the Specialized Research Fund for the Doctoral Program of Higher Education(2009322312000120113223110003)the China Postdoctoral Science Foundation Funded Project(2011M5000952012T50514)the Natural Science Foundation of Jiangsu Province(BK2011754BK2009426)the Jiangsu Postdoctoral Science Foundation Funded Project(1102103C)the Natural Science Fund of Higher Education of Jiangsu Province(12KJB520007)the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions(yx002001)
文摘How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. The winner tree is introduced to make the data nodes as the leaf nodes of the tree and the final winner on the purpose of reducing energy consumption is selected. The complexity of large-scale cloud data centers is fully consider, and the task comparson coefficient is defined to make task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve the node utilization, and reduce the overall power consumption of the cloud data center.
基金This work was supported by Universiti Sains Malaysia under external grant(Grant Number 304/PNAV/650958/U154).
文摘The interest in selecting an appropriate cloud data center is exponentially increasing due to the popularity and continuous growth of the cloud computing sector.Cloud data center selection challenges are compounded by ever-increasing users’requests and the number of data centers required to execute these requests.Cloud service broker policy defines cloud data center’s selection,which is a case of an NP-hard problem that needs a precise solution for an efficient and superior solution.Differential evolution algorithm is a metaheuristic algorithm characterized by its speed and robustness,and it is well suited for selecting an appropriate cloud data center.This paper presents a modified differential evolution algorithm-based cloud service broker policy for the most appropriate data center selection in the cloud computing environment.The differential evolution algorithm is modified using the proposed new mutation technique ensuring enhanced performance and providing an appropriate selection of data centers.The proposed policy’s superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator.The results are compared with the state-of-arts cloud service broker policies.
文摘Based on the Saudi Green initiative,which aims to improve the Kingdom’s environmental status and reduce the carbon emission of more than 278 million tons by 2030 along with a promising plan to achieve netzero carbon by 2060,NEOM city has been proposed to be the“Saudi hub”for green energy,since NEOM is estimated to generate up to 120 Gigawatts(GW)of renewable energy by 2030.Nevertheless,the Information and Communication Technology(ICT)sector is considered a key contributor to global energy consumption and carbon emissions.The data centers are estimated to consume about 13%of the overall global electricity demand by 2030.Thus,reducing the total carbon emissions of the ICT sector plays a vital factor in achieving the Saudi plan to minimize global carbon emissions.Therefore,this paper aims to propose an eco-friendly approach using a Mixed-Integer Linear Programming(MILP)model to reduce the carbon emissions associated with ICT infrastructure in Saudi Arabia.This approach considers the Saudi National Fiber Network(SNFN)as the backbone of Saudi Internet infrastructure.First,we compare two different scenarios of data center locations.The first scenario considers a traditional cloud data center located in Jeddah and Riyadh,whereas the second scenario considers NEOM as a potential cloud data center new location to take advantage of its green energy infrastructure.Then,we calculate the energy consumption and carbon emissions of cloud data centers and their associated energy costs.After that,we optimize the energy efficiency of different cloud data centers’locations(in the SNFN)to reduce the associated carbon emissions and energy costs.Simulation results show that the proposed approach can save up to 94%of the carbon emissions and 62%of the energy cost compared to the current cloud physical topology.These savings are achieved due to the shifting of cloud data centers from cities that have conventional energy sources to a city that has rich in renewable energy sources.Finally,we design a heuristic algorithm to verify the proposed approach,and it gives equivalent results to the MILP model.
基金supported by National Key R&D Program of China—Industrial Internet Application Demonstration-Sub-topic Intelligent Network Operation and Security Protection(2018YFB1802400).
文摘The important issues of network TCP congestion control are how to compute the link price according to the link status and regulate the data sending rate based on link congestion pricing feedback information.However,it is difficult to predict the congestion state of the link-end accurately at the source.In this paper,we presented an improved NUMFabric algorithm for calculating the overall congestion price.In the proposed scheme,the whole network structure had been obtained by the central control server in the Software Defined Network,and a kind of dual-hierarchy algorithm for calculating overall network congestion price had been demonstrated.In this scheme,the first hierarchy algorithm was set up in a central control server like Opendaylight and the guiding parameter B is obtained based on the intelligent data of global link state information.Based on the historical data,the congestion state of the network and the guiding parameter B is accurately predicted by the machine learning algorithm.The second hierarchy algorithm was installed in the Openflow link and the link price was calculated based on guiding parameter B given by the first algorithm.We evaluate this evolved NUMFabric algorithm in NS3,which demonstrated that the proposed NUMFabric algorithm could efficiently increase the link bandwidth utilization of cloud computing IoT datacenters.
基金Supported by the National Natural Science Foundation of China (60633020, 60573036)the Fundamental Funding Research Project of the Engineering College of APF (WJY 201026)
文摘In Cloud computing, data and service requests are responded by remote processes calls on huge data server clusters that are not totally trusted. The new computing pattern may cause many potential security threats. This paper explores how to ensure the integrity and correctness of data storage in cloud computing with user's key pair. In this paper, we aim mainly at constructing of a quick data chunk verifying scheme to maintain data in data center by implementing a balance strategy of cloud computing costs, removing the heavy computing load of clients, and applying an automatic data integrity maintenance method. In our scheme, third party auditor (TPA) is kept in the scheme, for the sake of the client, to periodically check the integrity of data blocks stored in data center. Our scheme supports quick public data integrity verification and chunk redundancy strategy. Compared with the existing scheme, it takes the advantage of ocean data support and high performance.
基金supported by Universiti Putra Malaysia and the Ministry of Education(MOE).
文摘As the amount of data continues to grow rapidly,the variety of data produced by applications is becoming more affluent than ever.Cloud computing is the best technology evolving today to provide multi-services for the mass and variety of data.The cloud computing features are capable of processing,managing,and storing all sorts of data.Although data is stored in many high-end nodes,either in the same data centers or across many data centers in cloud,performance issues are still inevitable.The cloud replication strategy is one of best solutions to address risk of performance degradation in the cloud environment.The real challenge here is developing the right data replication strategy with minimal data movement that guarantees efficient network usage,low fault tolerance,and minimal replication frequency.The key problem discussed in this research is inefficient network usage discovered during selecting a suitable data center to store replica copies induced by inadequate data center selection criteria.Hence,to mitigate the issue,we proposed Replication Strategy with a comprehensive Data Center Selection Method(RS-DCSM),which can determine the appropriate data center to place replicas by considering three key factors:Popularity,space availability,and centrality.The proposed RS-DCSM was simulated using CloudSim and the results proved that data movement between data centers is significantly reduced by 14%reduction in overall replication frequency and 20%decrement in network usage,which outperformed the current replication strategy,known as Dynamic Popularity aware Replication Strategy(DPRS)algorithm.
文摘This paper investigates autonomic cloud data center networks, which is the solution with the increasingly complex computing environment, in terms of the management and cost issues to meet users’ growing demand. The virtualized cloud networking is to provide a plethora of rich online applications, including self-configuration, self-healing, self-optimization and self-protection. In addition, we draw on the intelligent subject and multi-agent system, concerning system model, strategy, autonomic cloud computing, involving independent computing system development and implementation. Then, combining the architecture with the autonomous unit, we propose the MCDN (Model of Autonomic Cloud Data Center Networks). This model can define intelligent state, elaborate the composition structure, and complete life cycle. Finally, our proposed public infrastructure can be provided with the autonomous unit in the supported interaction model.
基金This work was supported by the National Natural Science Foundation of China(61871058).
文摘A smart grid is the evolved form of the power grid with the integration of sensing,communication,computing,monitoring,and control technologies.These technologies make the power grid reliable,efficient,and economical.However,the smartness boosts the volume of data in the smart grid.To obligate full benefits,big data has attractive techniques to process and analyze smart grid data.This paper presents and simulates a framework to make sure the use of big data computing technique in the smart grid.The offered framework comprises of the following four layers:(i)Data source layer,(ii)Data transmission layer,(iii)Data storage and computing layer,and(iv)Data analysis layer.As a proof of concept,the framework is simulated by taking the dataset of three cities of the Pakistan region and by considering two cloud-based data centers.The results are analyzed by taking into account the following parameters:(i)Heavy load data center,(ii)The impact of peak hour,(iii)High network delay,and(iv)The low network delay.The presented framework may help the power grid to achieve reliability,sustainability,and cost-efficiency for both the users and service providers.
基金supported by the National Natural Science Foundation of China(No.61163058No.61201250 and No.61363006)Guangxi Key Laboratory of Trusted Software(No.KX201306)
文摘In order to balancing based on data achieve dynamic load flow level, in this paper, we apply SDN technology to the cloud data center, and propose a dynamic load balancing method of cloud center based on SDN. The approach of using the SDN technology in the current task scheduling flexibility, accomplish real-time monitoring of the service node flow and load condition by the OpenFlow protocol. When the load of system is imbalanced, the controller can allocate globally network resources. What's more, by using dynamic correction, the load of the system is not obvious tilt in the long run. The results of simulation show that this approach can realize and ensure that the load will not tilt over a long period of time, and improve the system throughput.
基金supported by Beijing Natural Science Foundation (4174100)NSFC(61602054)the Fundamental Research Funds for the Central Universities
文摘Cloud computing is becoming an important solution for providing scalable computing resources via Internet. Because there are tens of thousands of nodes in data center, the probability of server failures is nontrivial. Therefore, it is a critical challenge to guarantee the service reliability. Fault-tolerance strategies, such as checkpoint, are commonly employed. Because of the failure of the edge switches, the checkpoint image may become inaccessible. Therefore, current checkpoint-based fault tolerance method cannot achieve the best effect. In this paper, we propose an optimal checkpoint method with edge switch failure-aware. The edge switch failure-aware checkpoint method includes two algorithms. The first algorithm employs the data center topology and communication characteristic for checkpoint image storage server selection. The second algorithm employs the checkpoint image storage characteristic as well as the data center topology to select the recovery server. Simulation experiments are performed to demonstrate the effectiveness of the proposed method.
文摘The fast technology development of 5G mobile broadband (5G), Internet of Things (IoT), Big Data Analytics (Big Data), Cloud Computing (Cloud) and Software Defined Networks (SDN) has made those technologies one after another and created strong interdependence among one another. For example, IoT applications that generate small data with large volume and fast velocity will need 5G with characteristics of high data rate and low latency to transmit such data faster and cheaper. On the other hand, those data also need Cloud to process and to store and furthermore, SDN to provide scalable network infrastructure to transport this large volume of data in an optimal way. This article explores the technical relationships among the development of IoT, Big Data, Cloud, and SDN in the coming 5G era and illustrates several ongoing programs and applications at National Chiao Tung University that are based on the converging of those technologies.
文摘Cloud computing is a new vision about the needs of information technology (IT). It provides a comprehensive concept for building a homogeneous environment through services offered in the cloud Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). Cloud computing is location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid. Cloud computing is computing paradigm that is driven by economies of scale, in which a set of dynamically-scalable resources such as servers, storages, platforms, and services are delivered on demand to the customers over the interuet. "Cloud computing is a continuation of the direction the industry has been going for the last several years in terms of using shared and elastically scalable computing resources," says Rex Wang1, VP of Product Marketing at Oracle, who spoke at the Gartner Data Center Conference, in January 2011. Cloud computing refers to dynamic provision of virtual distributed computational resources on demand via a computer network. Cloud computing is a new high technology industry that possesses a number of advantages over existing business practices: a reduction of expenses, technical staff, and efforts of the end users.
文摘The convergence of Internet of things(IoT)and 5G holds immense potential for transforming industries by enabling real-time,massive-scale connectivity and automation.However,the growing number of devices connected to the IoT systems demands a communication network capable of handling vast amounts of data with minimal delay.These generated enormous complex,high-dimensional,high-volume,and high-speed data also brings challenges on its storage,transmission,processing,and energy cost,due to the limited computing capabilities,battery capacity,memory,and energy utilization of current IoT networks.In this paper,a seamless architecture by combining mobile and cloud computing is proposed.It can agilely bargain with 5G-IoT devices,sensor nodes,and mobile computing in a distributed manner,enabling minimized energy cost,high interoperability,and high scalability as well as overcoming the memory constraints.An artificial intelligence(AI)-powered green and energy-efficient architecture is then proposed for 5G-IoT systems and sustainable smart cities.The experimental results reveal that the proposed approach dramatically reduces the transmitted data volume and power consumption and yields superior results regarding interoperability,compression ratio,and energy saving.This is especially critical in enabling the deployment of 5G and even 6G wireless systems for smart cities.
文摘随着云计算技术的快速发展,传统通信互联网数据中心(Internet Data Center,IDC)机房供电系统已难以满足其高可靠性、高效率、灵活性的需求。文章针对云计算数据中心的特点,提出了一种创新的通信IDC机房供电电源系统设计方案。该方案包括主供电、备用电源和不间断电源(Uninterruptible Power Supply,UPS)3个模块,并在相应模块中集成了智能配电管理系统、云平台能量管理系统、智能UPS能效管理系统。该研究为云计算数据中心的稳定运行和可持续发展提供了有力支撑。
文摘为了提升数据中心电源管理系统的响应速度与能效管理水平,以某互联网数据中心(Internet Data Center,IDC)机房改造项目为例,设计并实现了一套融合5G的智能IDC电源管理系统。通过构建感知层、网络层、平台层和应用层的分层架构,结合边缘计算与微服务技术,系统实现了能耗数据采集时间粒度、控制响应时延、平均故障发现时间、应急负载切换时间的大幅度缩短,同时在故障预警、应急调度等方面展现出显著优势。研究结果表明,该系统有效提升了电源管理的智能化、精细化与实时化水平,为绿色数据中心建设提供了可行路径。
文摘全生命周期管理是现代工程技术领域中的重要管控理念。互联网数据中心(Internet Data Center,IDC)通信机房为数据与网络的核心枢纽,在进行机电安装时涉及供配电、环境控制、安全与布线等多系统的协同。基于此,分析全生命周期视角下IDC通信机房机电安装面临的复杂问题,研究电力系统安装与监控、环境控制系统安装、安全与网络系统安装、综合布线与设备布局的具体方法,并结合实际案例开展实证分析,旨在形成覆盖全流程的机电安装技术框架,为机房运行节能化和稳定化提供参考。