Abstract: In the cloud environment, the transfer of data from one cloud server to another is called migration. Data can be delivered in various ways from one data centre to another. This research aims to improve the migration performance of virtual machines (VMs) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs in one data centre share resources such as central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In multi-cloud settings, VM migration addresses the performance degradation caused by cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty rate of the running application, and the latency of migration iterations. As a result, evaluating VM migration performance while considering all of these factors becomes a difficult task. The main effort of this research is to assess how these migration factors affect performance. The simulation results in Matlab show that as the VM size grows, the migration time and the downtime can be affected by three orders of magnitude. As the dirty page rate decreases, the migration time and the downtime grow, and the latency decreases as network bandwidth increases, both during migration and in the post-migration overhead calculation once the VM transfer is completed. All simulated VM migration cases were evaluated in a fuzzy inference system with performance graphs.
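The size/dirty-rate/bandwidth relationships this abstract describes follow the classic iterative pre-copy model. The Python sketch below is a generic illustration of that model under simplified assumptions (constant dirty rate and bandwidth, hypothetical parameter names); it is not the paper's Matlab or fuzzy-inference implementation.

    # Illustrative pre-copy live-migration model (not the paper's fuzzy model).
    # Each round retransmits the pages dirtied while the previous round was sent.
    def precopy_migration(vm_size_mb, dirty_rate_mbps, bandwidth_mbps,
                          max_rounds=30, stop_threshold_mb=50):
        total_time = 0.0
        to_send = vm_size_mb                         # round 1 copies the whole VM image
        for _ in range(max_rounds):
            round_time = to_send / bandwidth_mbps
            total_time += round_time
            to_send = dirty_rate_mbps * round_time   # pages dirtied meanwhile
            if to_send <= stop_threshold_mb:
                break
        downtime = to_send / bandwidth_mbps          # final stop-and-copy round
        return total_time + downtime, downtime

    # Larger VMs and higher dirty rates stretch both totals, as the abstract notes.
    print(precopy_migration(vm_size_mb=4096, dirty_rate_mbps=100, bandwidth_mbps=1000))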
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 62372184, the Science and Technology Commission of Shanghai Municipality of China under Grant No. 22DZ2229004, and the National Key Research and Development Plan of China under Grant No. 2022YFB4501703.
Abstract: Container live migration serves as the cornerstone of maintaining containerized workloads in cloud and edge datacenters, particularly for stateful applications. However, the de facto memory pre-copy-based migration faces severe performance issues for containers with dynamically changing memory dirty pages. Existing research often overlooks the dynamic nature of the memory pages of various workloads and their unpredictable relationship with system-level features, leading to poorly chosen stop-and-copy iterations during container migration. This can prolong container migrations by tens of seconds, severely degrading application performance. To address these challenges, we introduce U²CMigration, a user-unaware container live migration strategy for containerized workloads. It employs a lightweight and autonomous two-phase prediction by analyzing container memory pages across various workloads. We utilize data shift prediction for stable memory pages (phase 1). For unstable memory pages (phase 2), we develop an attention-based prediction that jointly considers the spatio-temporal characteristics of memory pages and system-level features. Guided by the dirty page predictions, we further develop a container live migration strategy that judiciously selects the optimal stop-and-copy iteration with the minimum amount of memory dirty pages. We have implemented an open-source prototype of U²CMigration (https://doi.org/10.57760/sciencedb.32136) based on the CRIU (checkpoint/restore in userspace) project. Extensive prototype experiments demonstrate that U²CMigration reduces the container migration duration by 26.1%–47.9% and the downtime by 21.3%–32.6% compared with state-of-the-art solutions.
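The decision U²CMigration automates (when to stop pre-copying and run the final stop-and-copy) can be sketched as follows. This is a minimal illustration assuming a predictor that yields expected dirty-page counts per candidate iteration; the function and names are hypothetical, not the released CRIU-based prototype.

    # Hedged sketch of prediction-guided stop-and-copy selection (hypothetical
    # names; the real U^2CMigration logic lives in its CRIU-based prototype).
    def choose_stop_iteration(predicted_dirty_pages, max_iterations=10):
        """predicted_dirty_pages: expected dirty-page counts, one per candidate
        pre-copy iteration, as produced by the two-phase predictor."""
        candidates = predicted_dirty_pages[:max_iterations]
        # Stop at the iteration whose residual dirty set (and hence the final
        # stop-and-copy transfer, i.e. the downtime) is predicted to be smallest.
        best_iter = min(range(len(candidates)), key=lambda i: candidates[i])
        return best_iter, candidates[best_iter]

    # Example: dirty pages plateau after a few iterations, so further pre-copy
    # rounds only prolong the migration without shrinking the downtime.
    print(choose_stop_iteration([9000, 4200, 1900, 1500, 1480, 1490]))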
Abstract: Despite increasing investment in integrated GPUs and next-generation interconnect research, discrete GPUs connected via PCIe still dominate the market, and the management of data communication between CPU and GPU continues to evolve. Initially, the programmer explicitly controlled data transfers between CPU and GPU. To simplify programming and enable system-wide atomic memory operations, GPU vendors have developed a programming model that provides a single virtual address space for accessing all CPU and GPU memories in the system. The page migration engine in this model automatically migrates pages between CPU and GPU on demand. To meet the needs of high-performance workloads, page sizes tend to grow larger. Limited by the low bandwidth and high latency of the interconnect compared with GDDR, larger page migrations incur longer delays, which may reduce the overlap of computation and transmission, waste time migrating unrequested data, block subsequent requests, and cause serious performance decline. In this paper, we propose partial page migration, which migrates only the requested part of a page to reduce the migration unit, shorten the migration latency, and avoid the performance degradation of full page migration as pages become larger. We show that partial page migration can largely hide the performance overheads of full page migration. Compared with programmer-controlled data transmission, when the page size is 2 MB and the PCIe bandwidth is 16 GB/sec, full page migration is 72.72× slower, while our partial page migration achieves a 1.29× speedup. When the PCIe bandwidth is raised to 96 GB/sec, full page migration is 18.85× slower, while our partial page migration provides a 1.37× speedup. Additionally, we examine the performance impact that PCIe bandwidth and migration unit size have on execution time, enabling designers to make informed decisions.
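A back-of-the-envelope transfer model makes the trade-off concrete. The sketch below uses the abstract's 2 MB page and 16 GB/sec PCIe figures, but the fixed per-transfer overhead is an assumed, illustrative value, and the model ignores the pipelining effects the paper measures.

    # Rough latency model for on-demand page migration over PCIe.
    # The fixed per-transfer overhead (us) is an assumed, illustrative value.
    FIXED_LATENCY_US = 5.0

    def migration_latency_us(bytes_moved, pcie_gbps):
        transfer_us = bytes_moved / (pcie_gbps * 1e9) * 1e6
        return FIXED_LATENCY_US + transfer_us

    full_page = migration_latency_us(2 * 1024**2, 16)   # whole 2 MB page
    partial   = migration_latency_us(4 * 1024,    16)   # only the 4 KB requested
    print(f"full: {full_page:.1f} us, partial: {partial:.2f} us")
    # The demanded data arrives orders of magnitude sooner, which is why partial
    # migration can hide most of the full-page migration overhead.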
Abstract: Static cache partitioning can reduce inter-application cache interference and improve the composite performance of a cache-polluting application and a cache-sensitive application when they run on cores that share the last-level cache in the same multi-core processor. In a virtualized system, since different applications might run on different virtual machines (VMs) at different times, it is inapplicable to partition the cache statically in advance. This paper proposes a dynamic cache partitioning scheme that uses hot page detection and page migration to improve the composite performance of co-hosted virtual machines dynamically, according to prior knowledge of cache-sensitive applications. Experimental results show that the overhead of our page migration scheme is low, while in most cases the composite performance improves over free composition.
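One common way to realize such a scheme is software page coloring: hot pages of the cache-sensitive VM are remapped into a reserved set of page colors so the polluter cannot evict its hot cache lines. The sketch below illustrates that general idea with hypothetical interfaces; the paper's actual detection and migration mechanism may differ.

    # Minimal sketch of the hot-page idea (hypothetical interfaces): pages whose
    # access counters exceed a threshold are remapped into page colors reserved
    # for the cache-sensitive VM's partition of the shared last-level cache.
    HOT_THRESHOLD = 1000

    def pick_hot_pages(access_counts):
        """access_counts: dict mapping guest page frame number -> access count."""
        return [pfn for pfn, hits in access_counts.items() if hits >= HOT_THRESHOLD]

    def migrate_to_reserved_colors(hot_pfns, reserved_colors):
        """Return a pfn -> color remapping that confines hot pages to the
        reserved cache partition."""
        colors = sorted(reserved_colors)
        return {pfn: colors[i % len(colors)] for i, pfn in enumerate(hot_pfns)}

    hot = pick_hot_pages({0x1a0: 4213, 0x1a1: 12, 0x2b7: 9800})
    print(migrate_to_reserved_colors(hot, reserved_colors=range(0, 16)))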
Abstract: This paper analyzes the principles of the bacterial foraging optimization (BFO) algorithm and the current state of research on it, and then improves on the shortcomings of the standard BFO algorithm, drawing mainly on psychologist Edward Thorndike's classic law of effect and economist Pareto's 80/20 rule (the Pareto principle). The improved BFO algorithm is evaluated through simulation experiments on function optimization problems; the results show that it converges faster and has stronger search capability than the standard BFO algorithm.
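For context, the sketch below shows the chemotaxis step at the core of the standard BFO algorithm (tumble in a random direction, then swim while fitness improves); it is a generic textbook rendering, not the improved variant proposed here.

    import random, math

    # Compact sketch of the chemotaxis step in standard BFO: tumble in a random
    # unit direction, then keep swimming in that direction while cost improves.
    def chemotaxis(cost, position, step_size=0.1, max_swim=4):
        dim = len(position)
        direction = [random.uniform(-1, 1) for _ in range(dim)]
        norm = math.sqrt(sum(d * d for d in direction)) or 1.0
        direction = [d / norm for d in direction]      # unit tumble direction
        best = cost(position)
        for _ in range(max_swim):                      # swim while improving
            trial = [x + step_size * d for x, d in zip(position, direction)]
            trial_cost = cost(trial)
            if trial_cost >= best:
                break
            position, best = trial, trial_cost
        return position, best

    sphere = lambda p: sum(x * x for x in p)           # simple test function
    print(chemotaxis(sphere, [1.0, -2.0]))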