In the modern era of 5th-generation (5G) networks, the data generated by User Equipment (UE) has increased significantly, with data file sizes varying from modest sensor logs to enormous multimedia files. In modern telecommunications networks, the need for high-end security and efficient management of these large data files is a great challenge for network designers. The proposed model provides efficient real-time virtual storage of UE data files (light and heavy) using the MinIO object storage system, whose built-in Software Development Kits (SDKs) are compatible with the Amazon S3 Application Program Interface (API), making operations such as file upload and data retrieval far more efficient than legacy virtual storage systems that require low-level HTTP requests for data management. To provide integrity, authenticity, and confidentiality (with integrity checking via an authentication tag) for UE data files, the 256-bit Advanced Encryption Standard (AES-256) in Galois/Counter Mode (GCM) is used in combination with MinIO. The AES-GCM-based MinIO approach is both more secure and faster than older modes such as Cipher Block Chaining (CBC). The performance of the proposed model is analyzed using the iperf utility to perform teletraffic parametric analysis (bandwidth, throughput, latency, and transmission delay) for three cases: (a) light UE traffic (upload and retrieval), (b) heavy UE traffic (upload and retrieval), and (c) comparison of the teletraffic parameters, namely bandwidth (B_ava), throughput (T_put), data transfer (D_Trans), latency (L_ms), and transmission delay (T_Delay), obtained from the proposed method against legacy virtual storage methods. The results show that the suggested MinIO-based system outperforms conventional systems in terms of latency, encryption efficiency, and performance under varying data load conditions.
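A minimal sketch of the encrypt-then-upload flow described above, assuming the MinIO Python SDK and the `cryptography` package; the endpoint, credentials, and bucket name are placeholders, and this is illustrative rather than the authors' implementation.

```python
import io
import os
from minio import Minio
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder endpoint and credentials (assumptions, not from the paper).
client = Minio("minio.example.com:9000", access_key="ACCESS", secret_key="SECRET", secure=True)
BUCKET = "ue-files"

key = AESGCM.generate_key(bit_length=256)   # AES-256 key, distributed out of band
aesgcm = AESGCM(key)

def upload_encrypted(name: str, plaintext: bytes) -> None:
    """Encrypt a UE file with AES-256-GCM and store nonce || ciphertext+tag as one object."""
    nonce = os.urandom(12)                                  # 96-bit nonce recommended for GCM
    blob = nonce + aesgcm.encrypt(nonce, plaintext, None)   # encrypt() appends the auth tag
    if not client.bucket_exists(BUCKET):
        client.make_bucket(BUCKET)
    client.put_object(BUCKET, name, io.BytesIO(blob), length=len(blob))

def retrieve_decrypted(name: str) -> bytes:
    """Fetch an object and verify/decrypt it; raises InvalidTag if integrity fails."""
    resp = client.get_object(BUCKET, name)
    try:
        blob = resp.read()
    finally:
        resp.close()
        resp.release_conn()
    return aesgcm.decrypt(blob[:12], blob[12:], None)
```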
The virtualized radio access network (vRAN) can implement virtualized baseband functions on general-purpose platforms and significantly expand the processing capacity of the radio access network (RAN). In this paper, a Not Only Stack (NO Stack) based vRAN is proposed for the fifth-generation (5G) mobile communication system. It adopts advanced virtualization technologies to remain flexible and sustainable. The baseband processing and storage resources should be sliced and orchestrated agilely to support multi-radio access technology (multi-RAT). The design is also analyzed and demonstrated through different use cases to validate its benefits. The proposed vRAN reduces signaling overheads and service response time in the bearer establishment procedure. From the analyses and demonstrations, it is concluded that the NO Stack based vRAN can effectively support multi-RAT convergence and flexible networking.
This paper presents component importance analysis for a virtualized system with live migration. Component importance analysis is significant for determining the design of a virtualized system from the availability and cost points of view. This paper discusses the importance of components with respect to system availability. Specifically, we introduce two different component importance analyses, one for a hybrid model (fault trees and continuous-time Markov chains) and one for continuous-time Markov chains, and show the analysis for existing probabilistic models of virtualized systems. In numerical examples, we illustrate the quantitative component importance analysis for a virtualized system with live migration.
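As a hedged illustration of component importance with respect to availability (not the paper's hybrid model), the sketch below computes steady-state availabilities from two-state CTMCs and the Birnbaum importance I_B(i) = dA_sys/dA_i for a primary/backup pair assumed to work in parallel; all rates are invented for the example.

```python
# Illustrative only: two-state CTMC availability and Birnbaum importance
# for an assumed 1-out-of-2 (parallel) primary/backup configuration.

def ctmc_availability(failure_rate: float, repair_rate: float) -> float:
    """Steady-state availability of a two-state up/down CTMC: mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

# Assumed rates (per hour); not taken from the paper.
A_primary = ctmc_availability(failure_rate=0.001, repair_rate=0.1)
A_backup  = ctmc_availability(failure_rate=0.002, repair_rate=0.1)

# Parallel structure: the system is down only if both components are down.
A_system = 1.0 - (1.0 - A_primary) * (1.0 - A_backup)

# Birnbaum importance: partial derivative of system availability w.r.t. each component.
I_B_primary = 1.0 - A_backup    # dA_system / dA_primary
I_B_backup  = 1.0 - A_primary   # dA_system / dA_backup

print(f"A_system = {A_system:.6f}")
print(f"Birnbaum importance: primary = {I_B_primary:.6f}, backup = {I_B_backup:.6f}")
```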
In a centralized cellular network architecture, the concept of the virtualized Base Station (VBS) has become attractive because it enables all base stations (BSs) to share computing resources in a dynamic manner. This can significantly improve the utilization efficiency of computing resources. In this paper, we study the computing resource allocation strategy for one VBS by considering the non-negligible delay introduced by switches. Specifically, we formulate the VBS's sum computing rate maximization as a set optimization problem. To address this problem, we first propose a computing resource scheduling algorithm, namely weight before one-step-greedy (WBOSG), which has linear computational complexity and considerable performance. Then, the OSG retreat (OSG-R) algorithm is developed to further improve system performance at the expense of computational complexity. Simulation results under practical settings are provided to validate the two proposed algorithms.
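The WBOSG and OSG-R procedures are not detailed in the abstract; as a loosely hedged sketch of the general idea only, the toy one-step greedy below assigns baseband tasks (considered in order of an assumed weight) to computing units, charging a fixed switch delay per dispatch. The rate model, weights, and delay penalty are hypothetical, not the paper's formulation.

```python
# Hypothetical one-step-greedy assignment of baseband tasks to computing units.
# Weights, rates, and the switch-delay penalty are illustrative assumptions only.

def one_step_greedy(tasks, units, switch_delay=0.1):
    """
    tasks: list of (task_id, weight, workload) tuples.
    units: dict unit_id -> processing rate (workload units per second).
    Returns an assignment dict and a sum-computing-rate proxy (work / makespan).
    """
    load = {u: 0.0 for u in units}            # accumulated busy time per unit
    assignment = {}
    # "Weight before": consider heavier-weighted tasks first.
    for task_id, weight, work in sorted(tasks, key=lambda t: -t[1]):
        # One-step greedy: pick the unit with the earliest finish time for this
        # task, including the per-dispatch switch delay.
        best = min(units, key=lambda u: load[u] + switch_delay + work / units[u])
        load[best] += switch_delay + work / units[best]
        assignment[task_id] = best
    total_work = sum(w for _, _, w in tasks)
    makespan = max(load.values()) if load else 0.0
    return assignment, (total_work / makespan if makespan else 0.0)

tasks = [("t1", 3.0, 12.0), ("t2", 1.0, 4.0), ("t3", 2.0, 8.0)]
units = {"cpu0": 10.0, "cpu1": 6.0}
print(one_step_greedy(tasks, units))
```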
Nowadays, with the significant growth of the mobile market, security issues on the Android operating system have also become an urgent matter. Trusted execution environment (TEE) technologies are considered an option for satisfying the inviolability property by taking advantage of hardware security. However, for Android, TEE technologies still have restrictions and limitations. The first issue is that non-original-equipment-manufacturer developers have limited access to the functionality of hardware-based TEEs. Another issue of hardware-based TEEs is the cross-platform problem: since every mobile device supports a different TEE vendor, it becomes an obstacle for developers to migrate their trusted applications to other Android devices. A software-based TEE solution is a potential approach that allows developers to customize, package, and deliver the product efficiently. Motivated by that idea, this paper introduces the VTEE model, a software-based TEE solution for Android devices. This research contributes an analysis of the feasibility of using a virtualized TEE on Android devices by considering two metrics: computing performance and security. The experiment shows that the VTEE model can host other software-based TEE services and deliver various cryptographic TEE functions in the Android environment. The security evaluation shows that adding the VTEE model to the existing Android design does not add more security issues than the traditional design. Overall, this paper shows applicable solutions for adjusting the balance between computing performance and security.
To save cost, more and more users choose to provision resources at the granularity of virtual machines in cluster systems, especially data centres. Maintaining a consistent member view is the foundation of reliable cluster management, and it also raises several challenging issues for large-scale cluster systems deployed with virtual machines (which we call virtualized clusters). In this paper, we introduce our experience in the design and implementation of scalable member view management on large-scale virtual clusters. Our research contributions cover three aspects: 1) we propose a scalable and reliable management infrastructure that combines a peer-to-peer structure and a hierarchical structure to maintain a consistent member view in virtual clusters; 2) we present a lightweight group membership algorithm that can reach a consistent member view within a single round of message exchange; 3) we design and implement a scalable membership service that can provision virtual machines and maintain a consistent member view in virtual clusters. Our work is verified on Dawning 5000A, which ranked No. 10 in the Top500 list of supercomputers in November 2008.
The concept of virtual machines is not new, but it is expanding rapidly and gaining popularity in the IT world. Hypervisors are also popular for security as a means of isolation. The virtualization of information technology infrastructure enables IT resources to be shared and used across several other devices and applications, which supports growing business needs. The environment created by virtualization is not restricted to any particular physical configuration or mode of execution, and the resources of a computer are shared logically. A hypervisor is software that interacts with the physical system to provide a virtualized hardware environment, supporting multiple operating systems running simultaneously on one physical server. This paper explores the benefits, types, and security issues of virtualization hypervisors in virtualized hardware environments.
Smartphones and cloud computing technologies have enabled the development of sophisticated mobile applications. Still, many of these applications do not perform well due to the limited computation, data storage, network bandwidth, and battery capacity of a mobile phone. While applications can be redesigned with client-server models to benefit from cloud services, users are then no longer in full control of the application, which is also a serious concern. We propose an innovative framework for executing mobile applications in a virtualized cloud environment. With encryption and isolation, this environment is controlled by the user and protected against eavesdropping by cloud providers. We have developed efficient schemes for migrating applications and synchronizing data between execution environments. Performance and power issues within a virtualized execution environment are also addressed using power-saving and scheduling techniques that enable automatic, seamless application migration.
Software-defined networks (SDN) have attracted much attention recently because of their flexibility in terms of network management. Increasingly, SDN is being introduced into wireless networks to form wireless SDN. One enabling technology for wireless SDN is network virtualization, which logically divides one wireless network element, such as a base station, into multiple slices, with each slice serving as a standalone virtual BS. In this way, one physical mobile wireless network can be partitioned into multiple virtual networks in a software-defined manner. Wireless virtual networks comprising virtual base stations also need to provide QoS to mobile end-user services in the same context as their physical hosting networks. One key QoS parameter is delay. This paper presents a delay model for software-defined wireless virtual networks. Network calculus is used in the modelling; in particular, stochastic network calculus, which describes more realistic models than deterministic network calculus, is used. The model enables theoretical investigation of wireless SDN, which is largely dominated by either algorithms or prototype implementations.
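For orientation, the deterministic network-calculus delay bound below shows the kind of relation such a model builds on; the paper's stochastic bounds additionally carry violation probabilities, and the specific arrival and service curves here are illustrative assumptions rather than the authors' model.

```latex
% Illustrative deterministic network-calculus delay bound with arrival curve
% $\alpha$ and service curve $\beta$ (maximum horizontal deviation):
\[
  d_{\max} \;\le\; h(\alpha,\beta)
  \;=\; \sup_{t\ge 0}\;\inf\{\,\tau \ge 0 : \alpha(t) \le \beta(t+\tau)\,\}.
\]
% For a token-bucket arrival $\alpha(t)=\sigma+\rho t$ and a rate-latency
% service $\beta(t)=R\,[t-T]^{+}$ with $R\ge\rho$, this reduces to
\[
  d_{\max} \;\le\; T + \frac{\sigma}{R}.
\]
```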
Cloud computing technology facilitates computing-intensive applications by providing virtualized resources that can be dynamically provisioned. However, users' requests vary according to the computational needs of different applications, and these applications can be presented as meta-jobs of user demand. The total processing time of these jobs must account for both the data transmission time over the Internet and the time needed to complete the jobs on virtual machines. In this paper, we present the V-heuristics scheduling algorithm for the allocation of virtualized network and computing resources under user constraints, applied within a service-oriented resource broker for job scheduling. This scheduling algorithm takes into account both the data transmission time and the computation time related to the virtualized network and virtual machines. The simulation results are compared with three different heuristic algorithms under conventional-network and virtual-network conditions: MCT, Min-Min, and Max-Min. We evaluate these algorithms within a simulated cloud environment using an Abilene network topology, which is a real physical core network topology. The experimental results show that the V-heuristics scheduling algorithm achieves significant performance gains for a variety of applications in terms of load balance, makespan, average resource utilization, and total processing time.
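As a reference point for the baselines named above (not the V-heuristics algorithm itself), the sketch below implements the classic Min-Min heuristic; the completion-time model, which simply adds an assumed per-job transfer time to the compute time, is a simplification.

```python
# Classic Min-Min heuristic, shown as a baseline sketch (not V-heuristics).
# Completion time = machine ready time + assumed transfer time + work / speed.

def min_min(jobs, machines):
    """
    jobs: dict job_id -> (work, transfer_time)
    machines: dict machine_id -> speed
    Returns a schedule dict job_id -> machine_id and the makespan.
    """
    ready = {m: 0.0 for m in machines}
    schedule = {}
    unscheduled = set(jobs)
    while unscheduled:
        best = None  # (completion_time, job, machine)
        for j in unscheduled:
            work, transfer = jobs[j]
            for m, speed in machines.items():
                ct = ready[m] + transfer + work / speed
                if best is None or ct < best[0]:
                    best = (ct, j, m)
        ct, j, m = best                  # job with the smallest minimum completion time
        schedule[j] = m
        ready[m] = ct
        unscheduled.remove(j)
    return schedule, max(ready.values())

jobs = {"j1": (100.0, 2.0), "j2": (40.0, 1.0), "j3": (60.0, 5.0)}
machines = {"vm1": 10.0, "vm2": 5.0}
print(min_min(jobs, machines))
```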
In this paper, under the premise of analyzing the need to carry out computer virtualized education in modern education, and based on the educational value embodied in computer virtualized training, the ways to realize the educational value of computer virtualized training are elaborated. Computer virtualized training is of great significance to the diversity of education and to the development of educational modernization.
Power control for virtualized environments has gained much attention recently. One of the major challenges is keeping the underlying infrastructure in reasonably low-power states while also achieving the service-level objectives (SLOs) of the applications running above it. Existing solutions, however, cannot effectively tackle this problem for virtualized environments. In this paper, we propose an automated power control solution for such scenarios in the hope of making some progress. The major advantage of our solution is being able to precisely control the CPU frequency levels of the physical environment and the CPU power allocations among virtual machines with respect to the SLOs of multiple applications. Based on control theory and online model estimation, our solution can adapt to variations in application power demands. Additionally, our solution can simultaneously manage CPU power control for all virtual machines according to their dependencies at either the application level or the infrastructure level. The experimental evaluation demonstrates that our solution outperforms three state-of-the-art methods in terms of achieving the application SLOs with low infrastructure power consumption.
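As a hedged illustration of control-theoretic CPU allocation (the paper's controller and online model estimator are not detailed in the abstract), the sketch below uses a simple integral controller that nudges a VM's CPU cap toward its application's latency SLO; the gain, cap bounds, and synthetic measurements are assumptions.

```python
# Illustrative integral controller for a per-VM CPU cap driven by SLO error.
# Gains, bounds, and the synthetic latency trace are assumptions, not the
# paper's estimator/controller design.

class CpuCapController:
    def __init__(self, slo_latency_ms: float, gain: float = 2.0,
                 cap_min: float = 5.0, cap_max: float = 100.0):
        self.slo = slo_latency_ms
        self.gain = gain
        self.cap_min = cap_min
        self.cap_max = cap_max
        self.cap = 50.0  # initial CPU cap in percent

    def step(self, measured_latency_ms: float) -> float:
        """One control period: raise the cap when latency exceeds the SLO,
        lower it (to save power) when there is slack."""
        error = (measured_latency_ms - self.slo) / self.slo  # normalized error
        self.cap += self.gain * error * 10.0                 # integral action
        self.cap = max(self.cap_min, min(self.cap_max, self.cap))
        return self.cap

# Example control loop over synthetic latency measurements (milliseconds).
ctrl = CpuCapController(slo_latency_ms=200.0)
for latency in [350.0, 280.0, 220.0, 190.0, 185.0]:
    print(f"latency={latency:.0f} ms -> new CPU cap {ctrl.step(latency):.1f}%")
```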
Modern datacenter servers hosting popular Internet services face significant and multi-faceted challenges in performance and power control. The user-perceived performance is the result of a complex interaction between complex workloads and a very complex underlying system. The highly dynamic and bursty workloads of Internet services fluctuate over multiple time scales, which has a significant impact on the processing and power demands of datacenter servers. High-density servers apply virtualization technology for capacity planning and system manageability. Such virtualized computer systems are increasingly large and complex. This paper surveys representative approaches to autonomic performance and power control on virtualized servers, which control the quality of service provided by virtualized resources, improve the energy efficiency of the underlying system, and reduce the burden of complex system management on human operators. It then presents three self-adaptive resource management techniques based on machine learning and control for percentile-based response time assurance, non-intrusive energy-efficient performance isolation, and joint performance and power guarantees on virtualized servers. The techniques were implemented and evaluated in a testbed of virtualized servers hosting benchmark applications. Finally, two research trends are identified and discussed for sustainable cloud computing in green datacenters.
Cloud computing is attracting an increasing number of simulation applications running in virtualized cloud data centers. These applications are submitted to the cloud in the form of simulation jobs, and the management and scheduling of simulation jobs play an essential role in offering efficient and highly productive computational services. In this paper, we design a management and scheduling service framework for simulation jobs in a two-tier virtualization-based private cloud data center, named simulation execution as a service (SimEaaS). It aims to release users from complex simulation run settings while guaranteeing QoS requirements adaptively. Furthermore, a novel job scheduling algorithm named adaptive deadline-aware job size adjustment (ADaSA) is designed to achieve high job responsiveness under QoS requirements for SimEaaS. ADaSA tries to make full use of idle fragmented resources by adaptively tuning the number of requested processes of submitted jobs in the queue, while guaranteeing that jobs' deadline requirements are not violated. Extensive experiments with trace-driven simulation are conducted to evaluate the performance of ADaSA. The results show that ADaSA outperforms both the cloud-based job scheduling algorithm KCEASY and traditional EASY in terms of response time (by up to 90%) and bounded slowdown (by up to 95%), while obtaining an approximately equivalent deadline-miss rate. ADaSA also outperforms two representative moldable scheduling algorithms in terms of deadline-miss rate (by up to 60%).
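The sketch below illustrates the core idea of shrinking a moldable job's requested process count to fit the currently idle processors while keeping its estimated runtime within the deadline; the perfectly divisible work model and all numbers are assumptions, not the ADaSA heuristic itself.

```python
# Hypothetical moldable-job "size adjustment": shrink the requested process
# count to fit currently idle processors, but only if the estimated runtime
# still meets the job's deadline. Assumes perfectly divisible work.

import math
from typing import Optional

def adjusted_request(work: float, requested: int, deadline: float,
                     now: float, idle_procs: int) -> Optional[int]:
    """Return a process count <= idle_procs that meets the deadline, or None."""
    if requested <= idle_procs:
        return requested                      # job fits as requested
    remaining = deadline - now
    if remaining <= 0:
        return None
    # Minimum processes needed so that work / p finishes before the deadline.
    min_procs = math.ceil(work / remaining)
    if min_procs <= idle_procs:
        return idle_procs                     # shrink into the idle fragment
    return None                               # cannot shrink without missing the deadline

# Example: 1200 units of work, 16 processes requested, deadline in 100 time units,
# but only 14 processors are idle right now -> run on 14 (finishes in ~86 units).
print(adjusted_request(work=1200, requested=16, deadline=100, now=0, idle_procs=14))
```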
RENEWING THE FORBIDDEN CITY'S CENTURY-OLD LEGACY. Oriental Outlook. 27 November 2025. At sunrise, the Forbidden City glows under a veil of gold; at night, it retreats into quiet dignity. But the palace never really sleeps. As visitors depart, the “digital relic vault” awakens online, where porcelain, calligraphy, jade and timepieces reveal their beauty in virtual form. History continues to breathe in the data stream.
BACKGROUND: Orthopaedic surgical education has traditionally depended on the apprenticeship model of “see one, do one, teach one”. However, reduced operative exposure, stricter work-hour regulations, medicolegal constraints, and patient safety concerns have constrained its practicality. Simulation-based training has become a reliable, safe, and cost-efficient alternative. Dry lab techniques, especially virtual and augmented reality, make up 78% of current dry lab research, whereas wet labs still set the standard for anatomical realism. AIM: To evaluate the effectiveness, limitations, and future directions of wet and dry lab simulation in orthopaedic training. METHODS: A scoping review was carried out across four databases (PubMed, Cochrane Library, Web of Science, and EBSCOhost) up to 2025. Medical Subject Headings included: "Orthopaedic Education", "Wet Lab", "Dry Lab", "Simulation Training", "Virtual Reality", and "Surgical Procedure". Eligible studies focused on orthopaedic or spinal surgical education, employed wet or dry lab techniques, and assessed training effectiveness. Exclusion criteria consisted of non-English publications, abstracts only, non-orthopaedic research, and studies unrelated to simulation. Two reviewers independently screened titles, abstracts, and full texts, resolving discrepancies with a third reviewer. RESULTS: From 1851 records, 101 studies met inclusion criteria: 78 on dry labs, 7 on wet labs, and 4 on both. Virtual reality (VR) simulations were most common, with AI increasingly used for feedback and assessment. Cadaveric training remains the gold standard for accuracy and tactile feedback, while dry labs, especially VR, offer scalability, lower cost (40%-60% savings in five studies), and accessibility for novices. Senior residents prefer wet labs for complex tasks; juniors favour dry labs for basics. Challenges include limited transferability data, lack of standard outcome metrics, and ethical concerns about cadaver use and AI assessment. CONCLUSION: Wet and dry labs each have unique strengths in orthopaedic training. A hybrid approach combining both, supported by standardised assessments and outcome studies, is most effective. Future efforts should aim for uniform reporting, integration of new technologies, and policy support for hybrid curricula to enhance skills and patient care.
This article addresses the challenges associated with three-dimensional (3D) design models in the power industry, such as substantial data volume and complex calculations. Traditional data exchange methods pose difficulties in coordinating design efforts among project participants, and current computing systems cannot meet the performance demands of 3D power design. To overcome these challenges, this study thoroughly evaluates both public and private clouds and ultimately selects a hybrid cloud infrastructure for a custom-built 3D design platform that addresses the specific needs of the power industry. Employing optimization algorithms for 3D design models, we reduce model size and implement automated synchronization of platform data through code. We also propose different levels of cloud services and identify the functionalities to be fulfilled by each layer of the 3D design cloud platform. Nested virtualization technology is used to create cloud desktops with different hardware requirements. Ultimately, we successfully establish a comprehensive framework for a 3D design platform specifically tailored to power design enterprises.
Software-defined networking (SDN) enables network virtualization through SDN hypervisors, which share the underlying physical SDN network among multiple logically isolated virtual SDN networks (vSDNs), each with its own controller. vSDN embedding, which refers to mapping a number of vSDNs onto the same substrate SDN network, is a key problem in the SDN virtualization environment. However, due to distinctive features of SDN, such as the logically centralized controller and different virtualization technologies, most existing embedding algorithms cannot be applied directly to SDN virtualization. In this paper, we consider controller placement and virtual network embedding as a joint vSDN embedding problem and formulate it as an integer linear program with the objectives of minimizing the embedding cost and the controller-to-switch delay for each vSDN. Moreover, we propose a novel online vSDN embedding algorithm called CO-vSDNE, which consists of a node mapping stage and a link mapping stage. In the node mapping stage, CO-vSDNE maps the controller and the virtual nodes to substrate nodes on the basis of the controller-to-switch delay while taking the subsequent link mapping into account. In the link mapping stage, CO-vSDNE adopts the k-shortest path algorithm to map the virtual links. The evaluation results from simulation and Mininet emulation show that the proposed CO-vSDNE not only significantly increases the long-term revenue-to-cost ratio and acceptance ratio while guaranteeing low average and maximum controller-to-switch delay, but also achieves good vSDN performance in terms of end-to-end delay and throughput.
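A minimal sketch of the link-mapping step, assuming a NetworkX substrate graph: for each virtual link it scans the k shortest substrate paths between the mapped endpoints and takes the first one with enough residual bandwidth. The node mapping, the delay objective, and the `bw` attribute name are assumptions, not the CO-vSDNE specification.

```python
# Hedged sketch of k-shortest-path link mapping on a substrate graph.
# Assumes edges carry a 'bw' (residual bandwidth) attribute and hop-count paths.
from itertools import islice
import networkx as nx

def map_virtual_link(substrate: nx.Graph, src: str, dst: str,
                     demand: float, k: int = 3):
    """Return the first of the k shortest substrate paths that can carry
    `demand` bandwidth, reserving the bandwidth along it; None if none fits."""
    for path in islice(nx.shortest_simple_paths(substrate, src, dst), k):
        edges = list(zip(path, path[1:]))
        if all(substrate[u][v]["bw"] >= demand for u, v in edges):
            for u, v in edges:                      # reserve residual bandwidth
                substrate[u][v]["bw"] -= demand
            return path
    return None

# Toy substrate with residual bandwidth on each edge (illustrative values).
G = nx.Graph()
G.add_edge("a", "b", bw=10)
G.add_edge("b", "c", bw=4)
G.add_edge("a", "d", bw=8)
G.add_edge("d", "c", bw=8)

print(map_virtual_link(G, "a", "c", demand=6))   # expect ['a', 'd', 'c']
```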
Storage class memory (SCM) has the potential to revolutionize the memory landscape thanks to its non-volatile and byte-addressable properties. However, there is little published work exploring its usage in modern virtualized cloud infrastructure. We propose SCM-vWrite, a novel architecture designed around SCM, to ease the performance interference of the virtualized storage subsystem. Through a case study on a typical virtualized cloud system, we first describe why current writeback approaches are not suitable for a virtualized environment, and then design and implement SCM-vWrite to address this problem. We also use typical benchmarks and realistic workloads to evaluate its performance. Compared with the traditional method on a conventional architecture, the experimental results show that SCM-vWrite can coordinate writeback flows more effectively among multiple co-located guest operating systems, achieving better disk I/O performance without any loss of reliability.
With the rapid increase of memory consumption by applications running in cloud data centers, we need more efficient memory management in a virtualized environment. Exploiting huge pages becomes more critical for a virtual machine's performance when it runs programs with large working set sizes. Programs with large working set sizes are more sensitive to memory allocation, which requires us to quickly adjust a virtual machine's memory to accommodate memory phase changes. It would be much more efficient if we could adjust virtual machines' memory at the granularity of huge pages. However, existing virtual machine memory reallocation techniques, such as ballooning, do not support huge pages. In addition, in order to drive effective memory reallocation, we need to predict the actual memory demand of a virtual machine. We find that traditional memory demand estimation methods designed for regular pages cannot simply be ported to a system adopting huge pages. How to adjust the memory of virtual machines timely and effectively according to periodic changes in memory demand is another challenge we face. This paper proposes a dynamic huge page based memory balancing system (HPMBS) for efficient memory management in a virtualized environment. We first rebuild the ballooning mechanism in order to dispatch memory at the granularity of huge pages. We then design and implement a huge page working set size estimation mechanism that can accurately estimate a virtual machine's memory demand in huge page environments. Combining these two mechanisms, we finally use an algorithm based on dynamic programming to achieve dynamic memory balancing. Experiments show that our system saves memory and improves overall system performance with low overhead.
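To make the dynamic-programming step concrete in a hedged way, the sketch below allocates a fixed budget of huge pages across VMs so as to maximize an assumed per-VM benefit (estimated demand satisfied, capped at the demand); the benefit model and granularity are illustrative, not the HPMBS formulation.

```python
# Hypothetical DP allocation of a huge-page budget across VMs.
# Assumed benefit of giving VM v exactly x huge pages: min(x, estimated_demand[v]),
# i.e., pages beyond the estimated demand add nothing.

def balance_huge_pages(demands, budget):
    """demands: list of per-VM estimated demands (in huge pages).
    Returns (best_total_benefit, allocation list) under the total page budget."""
    n = len(demands)
    # dp[v][b] = best benefit using VMs 0..v-1 with b pages spent.
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    choice = [[0] * (budget + 1) for _ in range(n + 1)]
    for v in range(1, n + 1):
        for b in range(budget + 1):
            for x in range(0, min(b, demands[v - 1]) + 1):  # no point exceeding demand
                val = dp[v - 1][b - x] + x
                if val > dp[v][b]:
                    dp[v][b] = val
                    choice[v][b] = x
    # Recover the allocation by walking the choices backwards.
    alloc, b = [0] * n, budget
    for v in range(n, 0, -1):
        alloc[v - 1] = choice[v][b]
        b -= choice[v][b]
    return dp[n][budget], alloc

# Example: three VMs with estimated demands of 6, 3, and 5 huge pages, budget 10.
print(balance_huge_pages([6, 3, 5], budget=10))
```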