Accurate prediction of cloud resource utilization is critical. It helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. At present, cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs. These methods are therefore insufficient for high-precision resource prediction in cloud computing systems. We propose a new time series prediction model, called SDVformer, which builds on the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter reduces noise and enhances the feature representation of the input data. The DVSA mechanism optimizes the selection of critical features to reduce computational complexity. The T-MOE framework adjusts the model structure based on different resource characteristics, thereby improving prediction accuracy and adaptability. Experimental results show that SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in prediction precision on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU). In particular, compared with the Informer model, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
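As a concrete illustration of the SG-filter preprocessing step described above, here is a minimal sketch that smooths a noisy utilization trace with SciPy's Savitzky-Golay filter. The window length, polynomial order, and synthetic trace are illustrative assumptions, not the settings used in SDVformer.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
cpu_usage = 50 + 20 * np.sin(t) + rng.normal(0, 5, t.size)  # noisy synthetic CPU trace (%)

# window_length must be odd and greater than polyorder; both are illustrative choices.
smoothed = savgol_filter(cpu_usage, window_length=21, polyorder=3)

# The denoised series would then feed the Informer-style encoder.
print(float(np.std(cpu_usage - smoothed)))  # rough magnitude of the removed noise
```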
For the cloud computing system, combining the memory function and incomplete matching of the biological immune system, a formal modeling and analysis method for cloud computing system survivability is proposed by analyzing the survival situation of critical cloud services. First, on the basis of the SAIR (susceptible, active, infected, recovered) model, the SEIRS (susceptible, exposed, infected, recovered, susceptible) model, and the vulnerability diffusion model of distributed virtual systems, the evolution state of the virus is divided into six types; the diffusion rules of the virus within a service domain of the cloud computing system and the propagation rules between service domains are then analyzed. Finally, on the basis of Bio-PEPA (biological performance evaluation process algebra), the survivability evolution of critical cloud services is formally modeled, yielding the SLIRAS (susceptible, latent, infected, recovered, antidotal, susceptible) model. Based on stochastic simulation and ODE (ordinary differential equation) simulation of the Bio-PEPA model, the sensitivity parameters of the model are analyzed from three aspects: inter-domain virus propagation speed, recovery ability, and memory ability. The results show that the proposed model fits the actual cloud computing system closely and reflects the survivability changes of the system well.
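To make the ODE-simulation side of this analysis concrete, the following is a hedged sketch of integrating a SLIRAS-style compartment model with SciPy; the transition structure and all rate constants are illustrative placeholders, not the paper's Bio-PEPA parameters.

```python
import numpy as np
from scipy.integrate import odeint

def sliras(y, t, beta, sigma, gamma, alpha, delta):
    """Toy SLIRAS dynamics: S -> L -> I -> R -> S, with an absorbing antidotal state A."""
    S, L, I, R, A = y
    dS = delta * R - beta * S * I           # recovered hosts lose immunity; new contacts infect
    dL = beta * S * I - sigma * L           # susceptible -> latent
    dI = sigma * L - gamma * I - alpha * I  # latent -> infected; infected recover or are immunized
    dR = gamma * I - delta * R              # infected -> recovered -> susceptible again
    dA = alpha * I                          # antidotal hosts stay immune (memory)
    return [dS, dL, dI, dR, dA]

y0 = [0.95, 0.0, 0.05, 0.0, 0.0]            # initial fraction of hosts in each state
t = np.linspace(0, 100, 1000)
sol = odeint(sliras, y0, t, args=(0.4, 0.2, 0.1, 0.05, 0.02))
print(sol[-1])                              # long-run fraction in each compartment
```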
The traditional collaborative filtering recommendation technology has shortcomings in the big data environment. To solve this problem, a personalized recommendation method based on cloud computing technology is proposed. The large data set and the recommendation computation are decomposed for parallel processing on multiple computers. A parallel recommendation engine based on the Hadoop open-source framework is established, and the effectiveness of the system is validated through learning recommendation on an English training platform. The experimental results show that the scalability of the recommender system can be greatly improved by using cloud computing technology to handle massive data in the cluster. Building on a comparison with traditional recommendation algorithms and the advantages of cloud computing, a personalized recommendation system based on cloud computing is thus realized.
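The parallel decomposition at the heart of such a Hadoop engine can be sketched on a single machine as a map/reduce pair over the item co-occurrence counting stage of collaborative filtering; the user histories and function names below are illustrative, not the system's actual code.

```python
from collections import defaultdict
from itertools import combinations

user_histories = {                       # user -> items interacted with
    "u1": ["lessonA", "lessonB", "lessonC"],
    "u2": ["lessonA", "lessonC"],
    "u3": ["lessonB", "lessonC"],
}

def map_phase(items):
    # Emit one (item pair, 1) record per co-occurrence in a user's history;
    # Hadoop would run this mapper in parallel over partitions of the data.
    return [((a, b), 1) for a, b in combinations(sorted(items), 2)]

def reduce_phase(records):
    # Sum the counts per key; the Hadoop shuffle would route keys to reducers.
    counts = defaultdict(int)
    for key, value in records:
        counts[key] += value
    return counts

emitted = [kv for items in user_histories.values() for kv in map_phase(items)]
print(dict(reduce_phase(emitted)))       # e.g. ('lessonA', 'lessonC') -> 2
```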
Cloud computing is becoming an important solution for providing scalable computing resources via the Internet. Because there are tens of thousands of nodes in a data center, the probability of server failures is nontrivial, so guaranteeing service reliability is a critical challenge. Fault-tolerance strategies, such as checkpointing, are commonly employed. When an edge switch fails, however, the checkpoint image may become inaccessible, so current checkpoint-based fault-tolerance methods cannot achieve the best effect. In this paper, we propose an optimal checkpoint method that is edge-switch-failure-aware. The method includes two algorithms. The first employs the data center topology and communication characteristics to select the checkpoint image storage server. The second employs the checkpoint image storage characteristics as well as the data center topology to select the recovery server. Simulation experiments demonstrate the effectiveness of the proposed method.
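A hedged sketch of the first algorithm's core idea: prefer a checkpoint storage server that does not share an edge switch with the compute node, so a single switch failure cannot make both the node and its image unreachable, breaking ties by communication cost. The topology and scoring below are illustrative assumptions, not the paper's exact algorithm.

```python
def select_storage_server(compute_node, candidates, edge_switch_of, traffic_cost):
    """Pick the candidate on a different edge switch with minimal traffic cost."""
    best, best_key = None, None
    for server in candidates:
        shares_switch = edge_switch_of[server] == edge_switch_of[compute_node]
        key = (shares_switch, traffic_cost[server])   # avoiding a shared switch dominates
        if best_key is None or key < best_key:
            best, best_key = server, key
    return best

edge_switch_of = {"c1": "sw1", "s1": "sw1", "s2": "sw2", "s3": "sw3"}
traffic_cost = {"s1": 1, "s2": 3, "s3": 2}
print(select_storage_server("c1", ["s1", "s2", "s3"], edge_switch_of, traffic_cost))  # s3
```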
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of task scheduling is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Task assignments and schedules substantially influence system operation in a heterogeneous multiprocessor system, and different heuristic-based task scheduling methods yield varying makespans. An intelligent scheduling algorithm should therefore efficiently determine the priority of every subtask based on the resources required, so as to lower the makespan. This research introduces a novel, efficient task scheduling method for cloud computing systems based on the cooperation search algorithm to tackle the heterogeneous cloud task scheduling problem. The basic idea of this method is to exploit the strengths of meta-heuristic algorithms to reach an optimal solution. We assess the algorithm's performance in three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, in terms of makespan.
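The makespan objective that the cooperation search algorithm (and the GA/WOA/GSA baselines) minimizes can be sketched as follows for a candidate task-to-processor assignment; the runtimes are illustrative and precedence constraints are omitted for brevity.

```python
def makespan(assignment, runtime):
    """assignment[i] = processor of task i; runtime[i][p] = time of task i on processor p."""
    finish = {}
    for task, proc in enumerate(assignment):
        finish[proc] = finish.get(proc, 0.0) + runtime[task][proc]
    return max(finish.values())          # schedule length = load of the busiest processor

runtime = [
    [4.0, 6.0],   # task 0 on processor 0 / processor 1 (heterogeneous speeds)
    [3.0, 2.0],
    [5.0, 4.0],
]
print(makespan([0, 1, 1], runtime))      # 6.0: proc0 busy 4.0, proc1 busy 6.0
```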
Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, it avoids heavy formalism and instead adopts the process algebra Communicating Sequential Processes (CSP).
With the demand for agile development and management, cloud applications today are moving towards a more fine-grained microservice paradigm, in which smaller and simpler functioning parts are combined to provide end-to-end services. In recent years, many research efforts have strived to optimize the performance of cloud computing systems in this new era. This paper provides an overview of existing work on recent system performance optimization techniques and classifies it by design focus. We also identify open issues and challenges in this important research direction.
In most existing CP-ABE schemes there is only one authority in the system, and all public and private keys are issued by this authority, which incurs ciphertext sizes and computation costs in the encryption and decryption operations that depend at least linearly on the number of attributes involved in the access policy. We propose an efficient multi-authority CP-ABE scheme in which the authorities need not interact to generate public information during the system initialization phase. Our scheme has constant ciphertext length and a constant number of pairing computations, and it can be proven CPA-secure in the random oracle model under the decisional q-BDHE assumption. When a user's attributes are revoked, the scheme transfers most re-encryption work to the cloud service provider, reducing the data owner's computational cost without compromising security. Analysis and simulation results show that the proposed schemes ensure the privacy and secure access of sensitive data stored in the cloud server and can cope with dynamic changes of users' access privileges in large-scale systems. Moreover, the multi-authority ABE eliminates the key escrow problem, optimizes the ciphertext length, and enhances the efficiency of the encryption and decryption operations.
The quantity and heterogeneity of intelligent energy generation and consumption terminals in the smart grid have been increasing drastically over the years. These edge devices place significant pressure on cloud computing (CC) systems and centralised control for data storage and processing in real-time operation and control. The integration of edge computing (EC) can effectively alleviate this pressure and enable real-time processing while ensuring data security. This paper conducts an extensive review of the EC-CC computing system and its application to the smart grid, which will integrate a vast number of dispersed devices. It first comprehensively describes the relationship among CC, fog computing (FC), and EC to provide a theoretical basis for differentiating them. It then introduces the architecture of the EC-CC computing system in the smart grid, consisting of both hardware structure and software platforms, along with the key technologies that support its functionalities. Thereafter, applications to the smart grid are discussed across the whole supply chain, including energy generation, transportation (transmission and distribution networks), and consumption. Finally, future research opportunities and challenges of applying EC-CC to the smart grid are outlined. This paper can inform future research and industrial exploitation of these new technologies to enable a highly efficient smart grid under the decarbonisation, digitalisation, and decentralisation transitions.
The rapid advances in artificial intelligence and big data have revolutionized the dynamic demands on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task owing to the huge distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver services to clients that comply with Quality-of-Service (QoS) requirements without violating Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that the cloud environment needs to police. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for Virtual Machine (VM) deployment, which contributes to the provision of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Rapidly increasing capacities, decreasing costs, and improvements in computational power, storage, and communication technologies have led to the development of many applications that carry increasingly large amounts of traffic on the global networking infrastructure. Smart devices lead to emerging technologies and play a vital role in this rapid evolution. Smart devices have become a primary 24/7 need in today's information technology world and support a wide range of processing-intensive applications. Extensive use of many applications on smart devices increases the complexity of mobile software and consumes resources at a massive level, including battery power, processor, and RAM, hindering normal operation. Appropriate resource utilization and energy efficiency are fundamental considerations for smart devices because their limited resources make it difficult for users to complete their tasks. In this study we propose the model of mobile energy augmentation using cloud computing (MEACC), a new framework to address the challenges of massive power consumption and inefficient resource utilization in smart devices. MEACC efficiently filters the applications to be executed on a smart device or offloaded to the cloud. Moreover, MEACC calculates the total execution cost on both the mobile and cloud sides, including the communication cost of any application to be offloaded. In addition, resources are monitored before the offloading decision is made. MEACC is a promising model for load balancing and power consumption reduction in emerging mobile computing environments.
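A minimal sketch of the kind of execute-locally-versus-offload comparison MEACC makes, assuming a simple cost model in which the device pays computation energy locally and only radio energy when offloading; all coefficients and the conservative both-cheaper policy are illustrative assumptions, not the framework's actual cost functions.

```python
def should_offload(cycles, data_bytes, local_speed, cloud_speed,
                   bandwidth, power_exec, power_tx):
    local_time = cycles / local_speed                 # run on the device
    local_energy = power_exec * local_time
    tx_time = data_bytes / bandwidth                  # ship inputs to the cloud
    remote_time = tx_time + cycles / cloud_speed
    remote_energy = power_tx * tx_time                # device only pays for the radio
    # Offload when it saves both energy and time (a conservative toy policy).
    return remote_energy < local_energy and remote_time < local_time

print(should_offload(cycles=5e9, data_bytes=2e6, local_speed=1e9,
                     cloud_speed=1e10, bandwidth=1e6, power_exec=0.9, power_tx=0.3))
```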
In mobile cloud computing, trust is a very important security parameter because data storage and data processing are performed remotely in the cloud. Aiming at the security and trust management of mobile agent systems in the mobile cloud computing environment, the Human Trust Mechanism (HTM) is used to study subjective trust formation, trust propagation, and trust evolution, and a subjective trust dynamic management algorithm (MASTM) is proposed. Based on the interaction experience between the mobile agent and the execution host, together with third-party recommendation information for collecting basic trust data, a public trust host selection algorithm is given. The malicious host isolation algorithm and the integrated trust degree calculation algorithm realize the functions of selecting a trusted cluster and isolating malicious hosts, thereby enhancing the security of interactions between mobile agents and hosts. Simulation and verification of the given algorithms were carried out to prove their feasibility and effectiveness.
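A hedged sketch in the spirit of the integrated trust degree calculation: direct interaction experience blended with third-party recommendations, followed by threshold-based isolation of low-trust hosts. The weighting scheme and threshold are illustrative, not the MASTM parameters.

```python
def integrated_trust(direct_history, recommendations, w_direct=0.7):
    direct = sum(direct_history) / len(direct_history)         # 1 = good interaction
    recommended = sum(recommendations) / len(recommendations)  # peers' trust scores
    return w_direct * direct + (1 - w_direct) * recommended

def select_trusted_hosts(hosts, threshold=0.6):
    """Keep hosts above the threshold; the rest are treated as potentially malicious."""
    return {h: t for h, t in hosts.items() if t >= threshold}

hosts = {
    "hostA": integrated_trust([1, 1, 0, 1], [0.8, 0.9]),   # mostly good experience
    "hostB": integrated_trust([0, 0, 1, 0], [0.3, 0.2]),   # mostly bad experience
}
print(select_trusted_hosts(hosts))   # hostA kept, hostB isolated
```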
Cloud computing can offer a very powerful, reliable, predictable, and scalable computing infrastructure for the execution of MAS (multi-agent systems) implementing complex agent-based applications, such as when modelling, simulation, and real-time running of complex systems must be provided. Multi-agent systems appear to be an adequate approach to current challenges in many areas. Important qualities of MAS include that they are open, interoperable, and heterogeneous systems. An agent is an active program entity with its own ideas about how to perform the tasks on its own agenda. Agents perceive, behave "reasonably", act in the environment, and communicate with other agents. Cloud infrastructures can offer an ideal platform on which to run MAS simulations, applications, and real-time systems because of their large amount of processing and memory resources, which can be dynamically configured for executing large agent-based software at unprecedented scale. Cloud computing can help chemical and food companies drive operational excellence; meet growing and changing customer demands; accelerate new product innovation and ramp-to-volume manufacturing in key markets; reduce IT spending; manage and mitigate supply chain risks; and enable faster and more flexible delivery of new IT systems. The production type of SOC (service-oriented computing) can be inspired by the "Cloud", since the Cloud offers attractive and natural solutions to several computing trends such as delivery over the Internet, utility use, flexibility, virtualization, "grid" distributed computing, outsourcing, and Web 2.0. Cloud-based production is also considered a new multidisciplinary field that includes "networked" production, virtual manufacturing, agile manufacturing, and of course cloud computing. This paper studies examples of cloud computing and MAS applications in food and chemistry development and industry, a proposal for using multi-agent systems in the control of batch processes, a modified ACO (ant colony optimization) approach for diversified service allocation and scheduling in the cloud paradigm, and examples of applications in the business area.
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
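To illustrate innovation (1), the sketch below uses tabular Q-learning with an epsilon-greedy policy to learn priority levels from a coarse (load, deadline-slack) state; the reward model and one-step state transition are toy assumptions, and the actual framework uses deep networks rather than a table.

```python
import random

ACTIONS = [0, 1, 2]                       # low / medium / high priority
Q = {}                                    # (state, action) -> estimated value

def choose_priority(state, eps=0.1):
    if random.random() < eps:             # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy episodes: reward giving high priority to tasks with zero deadline slack.
random.seed(0)
for _ in range(2000):
    state = (random.randint(0, 2), random.randint(0, 2))   # (load, slack) bucket
    action = choose_priority(state, eps=0.3)
    reward = 1.0 if (state[1] == 0 and action == 2) else 0.0
    update(state, action, reward, state)   # next_state = state: one-step toy only

print(choose_priority((1, 0), eps=0.0))    # tight slack -> learns high priority (2)
```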
Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires significant network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments, and data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system, and its four generations, that integrates emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system, called Cloud-MRI, aims to solve the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. The data are then uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis, and the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will preserve raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and ultimately improve diagnostic accuracy and work efficiency.
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict incoming workloads and allocate them flexibly. The proposed methodology integrates workload prediction using long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can allocate resources efficiently, prioritizing speed and energy preservation. The experimental results demonstrate that our DRL-based load balancing system significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance, consistently delivering reliable and durable performance across a range of dynamic workloads.
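A minimal PyTorch sketch of the LSTM workload-prediction component: sliding windows of past utilization in, next-step utilization out. The layer sizes, window length, and synthetic series are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the step after the window

torch.manual_seed(0)
t = torch.linspace(0, 20, 400)
series = (50 + 20 * torch.sin(t)).unsqueeze(-1)          # synthetic utilization (%)
windows = series.unfold(0, 24, 1)[:-1].transpose(1, 2)   # sliding 24-step windows
targets = series[24:]                                    # value right after each window

model = WorkloadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):                                     # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(windows), targets)
    loss.backward()
    opt.step()
print(float(loss))                                       # training MSE after 100 steps
```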
Cloud computing (CC) provides infrastructure, storage services, and applications to users, all of which should be secured by appropriate procedures or policies. Security in the cloud environment is essential to safeguard infrastructure and user information from unauthorized access through timely intrusion detection systems (IDS). Ensemble learning harnesses the collective power of multiple machine learning (ML) methods, and a feature selection (FS) process helps improve the robustness and overall precision of intrusion detection. This article therefore presents a meta-heuristic feature selection with ensemble learning-based anomaly detection (MFS-ELAD) algorithm for CC platforms. To realize this objective, the proposed approach first applies min-max standardization. High-dimensional features are then reduced by the Prairie Dogs Optimizer (PDO) algorithm. For the recognition procedure, the MFS-ELAD method ensembles three DL techniques: sparse auto-encoder (SAE), stacked long short-term memory (SLSTM), and Elman neural network (ENN) algorithms. Finally, the parameters of the DL algorithms are fine-tuned using the sand cat swarm optimizer (SCSO), which helps improve the recognition outcomes. Simulation of the MFS-ELAD system on the CSE-CIC-IDS2018 dataset exhibits promising performance over other methods, with a maximal precision of 99.71%.
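The min-max standardization that MFS-ELAD applies before feature selection can be sketched as follows; the flow-feature values are illustrative.

```python
import numpy as np

def min_max_scale(X):
    """Rescale each column of X to [0, 1]; constant columns map to 0."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)    # avoid division by zero
    return (X - lo) / span

flows = [[120, 0.5, 4000],                    # e.g. duration, rate, bytes per flow
         [ 30, 2.0, 1500],
         [240, 1.0, 9000]]
print(min_max_scale(flows))
```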
The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using tensor-based task representation, significantly minimizing computing overhead, while the BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Experimental results from extensive simulations indicate the efficacy of the suggested strategy: the proposed approach achieves the highest accuracy, reaching 98.1%, surpassing the GA-SVM model at 96.3% and the ART model at 95.4%. The proposed method also performs better in response time (1.598, versus 2.31 and 2.04 for the existing Energy-Focused Dynamic Task Scheduling (EFDTS) and Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL) methods) and in makespan (12, versus 20 and 14 for Round Robin (RR) and the Recurrent Attention-based Summarization Algorithm (RASA)). By explicitly addressing scalability and real-time performance, the hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources.
The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failures or execution delays in cloud computing. To maximize cloud resource throughput and decrease user response time, load balancing is needed to overcome user task execution delays and system failures. Most swarm-intelligent dynamic load balancing solutions based on hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in job distribution to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the solution from converging to a local optimum. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared to baseline load balancing strategies such as the Fractional Improved Whale Social Optimization Based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
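A hedged sketch of the kind of multi-criteria fitness such a hybrid SHO/WOA load balancer would minimize over candidate task-to-VM mappings; the three-term objective, its weights, and the random candidate sampling standing in for the metaheuristic search are all illustrative assumptions.

```python
import random

def fitness(mapping, exec_time, latency, w=(0.5, 0.3, 0.2)):
    """Weighted mix of makespan, mean network latency, and load imbalance."""
    loads = {}
    for task, vm in enumerate(mapping):
        loads[vm] = loads.get(vm, 0.0) + exec_time[task][vm]
    makespan = max(loads.values())
    mean_latency = sum(latency[vm] for vm in mapping) / len(mapping)
    imbalance = makespan - min(loads.values()) if len(loads) > 1 else makespan
    return w[0] * makespan + w[1] * mean_latency + w[2] * imbalance

exec_time = [[4, 6], [3, 2], [5, 4]]      # task x VM runtimes
latency = [0.2, 0.5]                       # per-VM network latency
random.seed(1)
candidates = [[random.randint(0, 1) for _ in range(3)] for _ in range(20)]
best = min(candidates, key=lambda m: fitness(m, exec_time, latency))
print(best, fitness(best, exec_time, latency))
```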