Load time series data in mobile cloud computing for the Internet of Vehicles (IoV) usually have composite linear and nonlinear characteristics. In order to accurately describe the dynamic change trend of such loads, this study designs a load prediction method using the resource scheduling model for IoV mobile cloud computing. Firstly, a chaotic analysis algorithm is implemented to process the load time series and construct learning samples for load prediction. Secondly, a support vector machine (SVM) is used to establish the load prediction model, and an improved artificial bee colony (IABC) algorithm is designed to enhance the learning ability of the SVM. Finally, a CloudSim simulation platform is created to collect per-minute CPU load history data from a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted against a grey model, a back-propagation neural network, a radial basis function (RBF) neural network, and an SVM with an RBF kernel function. The experimental results show that the prediction accuracy of the proposed method is significantly higher than that of the other models, with a significantly reduced real-time prediction error for resource loading in mobile cloud environments. Compared with single-prediction models, the proposed method can build multidimensional time series to capture complex load time series, fit and describe load change trends, approximate load time variability more precisely, and deliver strong generalization ability for load prediction models of mobile cloud computing resources.
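The chaotic-analysis step above turns the load series into learning samples by phase-space reconstruction: each delayed vector becomes an SVM input and the next load value its target. A minimal sketch under stated assumptions (the embedding dimension, delay, and SVR hyperparameters are illustrative; the paper tunes the SVM with IABC, which is omitted here):

```python
import numpy as np
from sklearn.svm import SVR

def phase_space_samples(series, dim=3, tau=1):
    """Build delayed vectors x_t = [s_t, s_{t+tau}, ..., s_{t+(dim-1)tau}]
    with one-step-ahead target y_t = s_{t+dim*tau}."""
    X, y = [], []
    for t in range(len(series) - dim * tau):
        X.append([series[t + i * tau] for i in range(dim)])
        y.append(series[t + dim * tau])
    return np.array(X), np.array(y)

# Synthetic per-minute CPU load standing in for the real CloudSim traces
rng = np.random.default_rng(0)
load = 50 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 1, 300)

X, y = phase_space_samples(load, dim=3, tau=1)
# Fixed C and gamma stand in for the IABC hyperparameter search
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X[:250], y[:250])
pred = model.predict(X[250:])
```

In the paper's pipeline, the embedding dimension and delay would come from the chaotic analysis itself (e.g. mutual information and false-nearest-neighbor tests) rather than being fixed by hand.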
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become one of the most discussed problems among researchers. Task scheduling algorithms are responsible for allocating tasks among computing resources for execution, and an inefficient task scheduling algorithm results in under- or over-utilization of the resources, which in turn degrades the services. Therefore, in the proposed work, load balancing is considered an important criterion for task scheduling in a cloud computing environment, as it can help reduce the overhead in this critical decision-oriented process. In this paper, we propose an adaptive genetic algorithm-based load balancing (GALB)-aware task scheduling technique that not only results in better utilization of resources but also helps optimize key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used in this work, which results in better adaptation for the fittest individuals of the current generation and prevents them from elimination. The CloudSim simulator has been used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better for all the key indicators and outperforms the peer algorithms taken into consideration.
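The adaptive crossover and mutation idea above can be sketched with one common scheme (fitter-than-average individuals get lower rates, protecting them from disruption; this is a standard adaptive-GA formula, not necessarily the exact one used in GALB):

```python
def adaptive_rates(fitness, f_max, f_avg,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Return (crossover prob, mutation prob) for one individual.
    Above-average individuals are scaled down toward the low rates,
    so the fittest of the generation are preserved; below-average
    individuals keep the high rates to maintain exploration."""
    if f_max == f_avg:                       # degenerate population
        return pc_lo, pm_lo
    if fitness >= f_avg:
        scale = (f_max - fitness) / (f_max - f_avg)
        return (pc_lo + (pc_hi - pc_lo) * scale,
                pm_lo + (pm_hi - pm_lo) * scale)
    return pc_hi, pm_hi

pc_best, pm_best = adaptive_rates(fitness=1.0, f_max=1.0, f_avg=0.5)
pc_weak, pm_weak = adaptive_rates(fitness=0.2, f_max=1.0, f_avg=0.5)
```

The best individual here receives the minimum rates (0.6, 0.01), while a below-average one keeps the full (0.9, 0.1), which matches the abstract's goal of preventing the fittest individuals from being eliminated.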
One of the challenging scheduling problems in Cloud data centers is to take into consideration the allocation and migration of reconfigurable virtual machines together with the integrated features of their hosting physical machines. We introduce a Dynamic and Integrated Resource Scheduling algorithm (DAIRS) for Cloud data centers. Unlike traditional load-balance scheduling algorithms, which often consider only a single factor such as the CPU load of physical servers, DAIRS treats CPU, memory, and network bandwidth in an integrated way for both physical and virtual machines. We develop integrated measurements for the total imbalance level of a Cloud data center as well as the average imbalance level of each server. Simulation results show that DAIRS performs well with regard to total imbalance level, average imbalance level of each server, and overall running time.
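The integrated measurement described above can be sketched as a weighted combination of per-server CPU, memory, and bandwidth utilization, with imbalance measured as deviation from the data-center mean. The weights and the variance-style imbalance formula are illustrative assumptions, not the values defined in the DAIRS paper:

```python
def integrated_load(cpu, mem, net, w=(0.4, 0.3, 0.3)):
    """Weighted integrated load of one server from three normalized
    utilizations; the weights here are illustrative."""
    return w[0] * cpu + w[1] * mem + w[2] * net

def imbalance_levels(servers):
    """Total imbalance = mean squared deviation of integrated loads
    across servers; per-server imbalance = each server's squared
    deviation from the data-center average."""
    loads = [integrated_load(*s) for s in servers]
    avg = sum(loads) / len(loads)
    per_server = [(l - avg) ** 2 for l in loads]
    return sum(per_server) / len(loads), per_server

# Three servers as (cpu, mem, net) utilization tuples
servers = [(0.8, 0.6, 0.4), (0.3, 0.5, 0.2), (0.6, 0.6, 0.6)]
total, per_server = imbalance_levels(servers)
```

A scheduler in this spirit would place or migrate a VM onto whichever host minimizes the resulting total imbalance rather than balancing CPU alone.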
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm is compared across several plant types, and the best plant model is selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
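For readers unfamiliar with plant growth simulation algorithms (PGSA), the core step assigns each candidate growth point a "morphactin concentration" proportional to how much it improves on the worst objective value, then selects the next branching point by roulette. This is a sketch of the standard PGSA step only; the paper's light-induction weighting is not reproduced here:

```python
import random

def morphactin_concentrations(growth_points, f):
    """For a minimization objective f, each growth point's concentration
    is its gap to the worst value, normalized so concentrations sum to 1."""
    values = [f(p) for p in growth_points]
    worst = max(values)
    gaps = [worst - v for v in values]
    total = sum(gaps) or 1.0          # avoid division by zero when all equal
    return [g / total for g in gaps]

def select_growth_point(growth_points, probs, rng=random.Random(0)):
    """Roulette-wheel selection over the concentration distribution."""
    r, acc = rng.random(), 0.0
    for p, pr in zip(growth_points, probs):
        acc += pr
        if r <= acc:
            return p
    return growth_points[-1]

points = [2.0, 0.5, 1.0]              # 1-D candidate growth points
probs = morphactin_concentrations(points, f=lambda x: x * x)
chosen = select_growth_point(points, probs)
```

The worst point receives probability zero, so growth concentrates on promising branches, which is what gives PGSA its directed search behavior.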
Studies to enhance the management of electrical energy have gained considerable momentum in recent years. How much energy households will need is a pressing question, as answering it enables resource planning at both the power-grid and consumer levels. A non-intrusive inference process can be adopted to predict the amount of energy required by appliances. In this study, an inference process for appliance consumption based on temporal and environmental factors, used as a soft sensor, is proposed. First, a study of the correlation between the electrical and environmental variables is presented. Then, a resampling process is applied to the initial data set to generate three other subsets of data, and all the subsets are evaluated to deduce the adequate granularity for predicting energy demand. Next, a cloud-assisted deep neural network model is designed to forecast short-term energy consumption in a residential area while preserving user privacy. The solution is applied to the consumption data of four appliances selected from a set of real household power data. The experimental results show that the proposed framework estimates consumption with convincing accuracy.
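The resampling step that derives three coarser subsets from the raw readings can be sketched with pandas; the 15-minute, 30-minute, and hourly granularities below are assumptions for illustration, since the abstract does not name the granularities it evaluates:

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute appliance readings standing in for the real household data
idx = pd.date_range("2024-01-01", periods=240, freq="min")
raw = pd.Series(np.random.default_rng(1).uniform(0, 200, len(idx)), index=idx)

# Three coarser subsets, each a candidate granularity for the forecaster;
# mean() aggregates the power readings within each bin
subsets = {g: raw.resample(g).mean() for g in ("15min", "30min", "1h")}
```

Each subset would then be fed to the prediction model in turn, and the granularity giving the best forecast error retained.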
The Kubernetes container cloud is a popular cloud computing technology, and its default elastic scaling mechanism, HPA (Horizontal Pod Autoscaler), scales cloud-native applications horizontally. However, HPA has the following problems: it relies on a single load metric, making it ill-suited to diverse cloud-native applications; it scales based on the current load, so the scaling process lags noticeably behind demand; and its sliding-time-window scale-in algorithm shrinks capacity slowly, which wastes system resources. To address these problems, this paper proposes an improved elastic scaling method. A dynamic weighted fusion algorithm is designed to fuse multiple load metrics into a composite load factor that comprehensively reflects the overall load of a cloud-native application. A CEEMDAN (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise)-ARIMA (Autoregressive Integrated Moving Average) prediction model is proposed, and proactive scaling based on its predicted load values copes with traffic bursts. A method combining fast scale-in with the sliding time window reduces resource waste while preserving application quality of service. Experimental results show that, compared with the HPA mechanism, the improved elastic scaling method shortens the average response time by 336.55% when handling the first traffic burst, reduces system resource usage by 50% after the traffic subsides, and scales out quickly on subsequent bursts, shortening the average response time by 66.83%.
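The dynamic weighted fusion step above can be sketched as follows; the specific weighting rule (base weights amplified by each metric's current level, then renormalized) is an illustrative choice, not the paper's exact algorithm:

```python
def composite_load(metrics, base_weights):
    """Fuse several normalized load metrics (e.g. CPU, memory, request
    rate) into one composite load factor in [0, 1]. Weights are
    re-scaled dynamically so metrics closer to saturation count more."""
    # dynamic weight: base weight amplified by the metric's current level
    dyn = [w * (1.0 + m) for w, m in zip(base_weights, metrics)]
    total = sum(dyn)
    dyn = [d / total for d in dyn]            # renormalize to sum to 1
    return sum(w * m for w, m in zip(dyn, metrics))

# Hypothetical CPU / memory / request-rate utilizations with base weights
load = composite_load(metrics=[0.9, 0.4, 0.6], base_weights=[0.5, 0.3, 0.2])
```

A scaler driven by this factor reacts to whichever resource is actually under pressure, instead of the single metric HPA watches.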
With informatization developing rapidly, efficiently managing large volumes of cloud computing resources has become a major challenge in IT operations, and accurate load prediction is a key technique for meeting it. To address this problem, a hybrid prediction model, STL-DeepAR-HW, is proposed, combining Seasonal and Trend decomposition using Loess (STL), the Holt-Winters model, and the deep autoregressive model DeepAR. The fast Fourier transform and the autocorrelation function are first used to extract the periodic characteristics of the data, and the optimal period found is used to apply STL decomposition, splitting the data into trend, seasonal, and remainder components. DeepAR and Holt-Winters then predict the trend and seasonal components respectively, and their outputs are combined into the final prediction. Experiments on the public AzurePublicDataset show that, compared with models such as Transformer, Stacked-LSTM, and Prophet, the hybrid model achieves higher accuracy and applicability in load prediction.
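The period-extraction step above can be sketched with a plain FFT amplitude spectrum (the autocorrelation cross-check the paper also uses is omitted; the series below is synthetic):

```python
import numpy as np

def dominant_period(series):
    """Estimate the dominant cycle length (in samples) from the FFT
    amplitude spectrum: pick the non-zero frequency bin with the
    largest amplitude and convert it back to a period."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                      # drop the DC component
    amp = np.abs(np.fft.rfft(x))
    k = int(np.argmax(amp[1:])) + 1       # skip bin 0 (zero frequency)
    return round(len(x) / k)

# Synthetic load with a 24-sample daily cycle plus noise
t = np.arange(480)
series = np.sin(2 * np.pi * t / 24) \
    + 0.1 * np.random.default_rng(2).normal(size=480)
period = dominant_period(series)
```

The recovered period (24 here) is then passed to STL as its seasonal period before the trend/seasonal/remainder split.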
The rapid development of cloud computing has steadily increased the load pressure on servers, making accurate prediction of load resources a key issue for resource allocation in cloud centers and for safe server operation. Existing single models fall short in capturing global features, while hybrid models lack stationarity and interpretability when handling time-series data. This paper therefore proposes a hybrid model based on NeuralProphet decomposition combined with a convolutional neural network (CNN), a long short-term memory (LSTM) network, and an attention mechanism. NeuralProphet decomposes the load data into trend, seasonal, and autoregressive components, improving the stationarity and interpretability of the data so that the model can capture global features and long-term dependencies more efficiently; the attention mechanism assigns weights dynamically to focus on the key features driving the prediction, further improving accuracy at future time steps. Experimental results on the Alibaba Cluster Data V2018 dataset show that the proposed hybrid model outperforms other deep learning models in prediction accuracy and performance. Compared with the single NeuralProphet model and the CNN-BiLSTM hybrid model, it improves the R² score by 17.9% and reduces the root mean square error (RMSE) by 73.6%, the mean absolute error (MAE) by 69.7%, and the symmetric mean absolute percentage error (sMAPE) by 65.3%, offering higher prediction accuracy and robustness and helping improve cloud resource utilization.
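The attention mechanism's dynamic weight assignment can be sketched as standard dot-product attention over the per-step CNN-LSTM hidden states (the abstract does not give the exact scoring function, so this is one common form, in NumPy for clarity):

```python
import numpy as np

def attention_pool(hidden, query):
    """Dot-product attention: score each time step's hidden state
    against a query, softmax the scores into weights, and return the
    weighted sum (context vector) fed to the prediction head.
    hidden: (T, d) array of per-step states; query: (d,) vector."""
    scores = hidden @ query                 # one score per time step, (T,)
    w = np.exp(scores - scores.max())       # numerically stable softmax
    w /= w.sum()
    context = w @ hidden                    # (d,) weighted combination
    return w, context

rng = np.random.default_rng(3)
hidden = rng.normal(size=(10, 4))           # 10 steps of 4-dim LSTM outputs
query = rng.normal(size=4)                  # learned query (random here)
w, context = attention_pool(hidden, query)
```

The weights `w` are what let the model emphasize the time steps most informative for the next-step load, rather than treating the whole window uniformly.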
Funding: This work was supported by the Shandong Medical and Health Science and Technology Development Plan Project (No. 202012070393).
Funding: Supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, under Grant No. 2010-2011, and by the Chinese Post-doctoral Research Foundation.
Funding: Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676); Shanxi Soft Science Program Research Fund Project (2016041008-6).
Funding: Funded by NARI Group's Independent Project of China (Grant No. 524609230125) and the Foundation of NARI-TECH Nanjing Control System Ltd. of China (Grant No. 0914202403120020).