The rapid advance of artificial intelligence and big data has made the computing resources demanded for executing tasks in the cloud highly dynamic. Achieving autonomic resource management is a daunting task because of the cloud's large, distributed, and heterogeneous environment. Moreover, the cloud network must manage resources autonomically and deliver services to clients that comply with Quality-of-Service (QoS) requirements without violating Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks cannot handle cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed to handle the dynamic resource requirements of the cloud by estimating the workload that the cloud environment must police. CBBM-WARMS first adopts an adaptive density peak clustering algorithm to cluster cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. Finally, it uses the CBBM for Virtual Machine (VM) deployment, which contributes to the provisioning of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms an SLA cost reduced by 19.21% and an SLA violation rate reduced by 18.74%, outperforming the compared autonomic cloud resource management frameworks.
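To make the workload-clustering stage concrete, the following is a minimal Python sketch of density-peak clustering (in the Rodriguez-Laio style) applied to workload feature vectors. It is not the authors' adaptive variant; the cutoff heuristic, cluster count, and the synthetic (CPU, memory) features are illustrative assumptions.

```python
# Minimal density-peak clustering sketch for grouping workload feature vectors.
import numpy as np

def density_peak_cluster(X, dc=None, n_clusters=3):
    """Cluster rows of X by local density (rho) and separation distance (delta)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    if dc is None:
        dc = np.percentile(d[d > 0], 5)                          # cutoff-distance heuristic
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0               # Gaussian local density
    order = np.argsort(-rho)                                     # density-descending order
    delta = np.full(len(X), d.max())
    nearest = np.arange(len(X))
    for i in range(1, len(order)):
        p = order[i]
        higher = order[:i]                                       # points with larger density
        j = higher[np.argmin(d[p, higher])]
        delta[p], nearest[p] = d[p, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]            # peaks: high rho and high delta
    labels = np.full(len(X), -1)
    labels[centers] = np.arange(n_clusters)
    for p in order:                                              # assign in density order
        if labels[p] < 0:
            labels[p] = labels[nearest[p]]
    return labels

# Example: cluster synthetic (CPU demand, memory demand) workload vectors.
rng = np.random.default_rng(0)
workloads = np.vstack([rng.normal(m, 0.05, (20, 2)) for m in (0.2, 0.5, 0.8)])
print(density_peak_cluster(workloads, n_clusters=3))
```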
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
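As a rough stand-in for the DRL-based dynamic priority mechanism described above, here is a tabular Q-learning sketch that learns which priority level to assign a task from a coarse (queue length, deadline slack) state. The state discretization, reward shape, and toy latency model are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: tabular Q-learning priority assignment for incoming tasks.
import random
from collections import defaultdict

PRIORITIES = [0, 1, 2]                       # low / medium / high
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)                       # Q[(state, action)]

def choose_priority(state):
    if random.random() < EPS:
        return random.choice(PRIORITIES)     # explore
    return max(PRIORITIES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in PRIORITIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def simulate_step(queue_len, slack, priority):
    """Toy model: high priority cuts queueing delay but adds contention cost when slack is large."""
    latency = queue_len * (3 - priority) + priority * slack
    return -latency                          # reward = negative latency

random.seed(0)
for _ in range(2000):
    state = (random.randint(0, 4), random.randint(0, 3))   # (queue bucket, slack bucket)
    action = choose_priority(state)
    reward = simulate_step(*state, action)
    next_state = (random.randint(0, 4), random.randint(0, 3))
    update(state, action, reward, next_state)

# Long queue and tight deadline -> the learned policy favors a high priority.
print(max(PRIORITIES, key=lambda a: Q[((4, 0), a)]))
```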
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction utilising long short-term memory (LSTM) networks with efficient load-balancing techniques led by deep Q-learning and Actor-Critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritizing speed and energy preservation. The experimental results demonstrate that our load balancing system, which utilises DRL, significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance. It consistently provides reliable and durable performance across a range of dynamic workloads.
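The following is a minimal PyTorch sketch of the LSTM workload-prediction stage described above: predicting the next utilization sample from a short history window. The window length, layer sizes, training loop, and synthetic trace are illustrative assumptions rather than the study's configuration.

```python
# Hedged sketch: one-step-ahead workload prediction with an LSTM.
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next value

# Synthetic utilization trace with a daily-like cycle plus noise.
t = torch.arange(0, 200, dtype=torch.float32)
trace = 0.5 + 0.3 * torch.sin(t / 12) + 0.05 * torch.randn_like(t)

window = 24
X = torch.stack([trace[i:i + window] for i in range(len(trace) - window)]).unsqueeze(-1)
y = trace[window:].unsqueeze(-1)

model = WorkloadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):                           # short training loop for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print(float(model(X[-1:]).squeeze()))          # predicted next utilization sample
```

In the pipeline sketched by the abstract, such a prediction would then feed the DRL-based load balancer rather than being used on its own.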
Accurate prediction of cloud resource utilization is critical. It helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. At present, cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs; these methods are insufficient to achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which extends the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter is designed to reduce noise and enhance the feature representation of the input data. The DVSA mechanism is proposed to optimize the selection of critical features and reduce computational complexity. The T-MOE framework is designed to adjust the model structure based on different resource characteristics, thereby improving prediction accuracy and adaptability. Experimental results show that the proposed SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in terms of prediction precision on both the Alibaba public dataset and the dataset collected by Beijing Jiaotong University (BJTU). In particular, compared with the Informer model, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
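The Savitzky-Golay smoothing step mentioned above can be illustrated in a few lines with SciPy. The window length, polynomial order, and synthetic CPU trace below are illustrative choices, not the paper's settings.

```python
# Hedged sketch: SG-filter denoising of a resource-utilization series before prediction.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.arange(500)
cpu = 0.5 + 0.3 * np.sin(t / 30) + 0.08 * rng.standard_normal(500)   # noisy CPU trace

smoothed = savgol_filter(cpu, window_length=21, polyorder=3)          # Savitzky-Golay smoothing

print(float(np.abs(cpu - smoothed).mean()))    # average per-sample noise removed
```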
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Without it, hardware resource allocation in cloud computing still suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, which permit dynamic cloud scalability while maintaining superior service quality. For host prediction, we therefore present a hybrid convolutional neural network–long short-term memory (CNN-LSTM) model in this work. First, the input of the suggested hybrid model is subjected to the vector autoregression (VAR) technique, so that the multivariate data are filtered prior to analysis to eliminate linear interdependencies. After that, the remaining data are processed and sent into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves the long short-term memory, which is suitable for representing the temporal information of irregular trends in time series components. The key to the entire process is the choice of the most appropriate activation function for this type of model: a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degree of unpredictability in data centers. Because of this, two real load traces were used in this study's performance assessment, one of them taken from a typical distributed system. In comparison with CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
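One plausible reading of the VAR preprocessing step above is sketched below with statsmodels: fit a vector autoregression to a multivariate (CPU, memory) trace and keep the residuals, which strip out the linear interdependencies before the CNN-LSTM stage. The lag order and the synthetic trace are assumptions for illustration.

```python
# Hedged sketch: VAR residuals as the "linearly filtered" input for a downstream model.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
n = 400
cpu = 0.5 + 0.2 * np.sin(np.arange(n) / 25) + 0.05 * rng.standard_normal(n)
mem = 0.4 + 0.5 * cpu + 0.05 * rng.standard_normal(n)     # memory tracks CPU linearly

data = np.column_stack([cpu, mem])
fit = VAR(data).fit(maxlags=5)                             # fit the linear dynamics
residuals = fit.resid                                      # nonlinear remainder for the CNN-LSTM stage

print(residuals.shape, float(np.abs(residuals).mean()))
```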
The rapid advancement of technology has paved the way for innovative approaches to education. Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing are three transformative technologies reshaping how education is delivered, accessed, and experienced. These technologies enable personalized learning, optimize teaching processes, and make educational resources more accessible to learners worldwide. This paper examines the integration of these technologies into smart education systems, highlighting their applications, benefits, and challenges, and exploring their potential to bridge gaps in educational equity and inclusivity.
Cloud computing (CC) provides infrastructure, storage services, and applications to users, which should be secured by appropriate procedures or policies. Security in the cloud environment becomes essential to safeguard infrastructure and user information from unauthorized access by implementing timely intrusion detection systems (IDS). Ensemble learning harnesses the collective power of multiple machine learning (ML) methods and, aided by a feature selection (FS) process, improves the robustness and overall precision of intrusion detection. Therefore, this article presents a meta-heuristic feature selection by ensemble learning-based anomaly detection (MFS-ELAD) algorithm for CC platforms. To realize this objective, the proposed approach first applies a min-max standardization technique. Then, the dimensionality of the features is reduced by the Prairie Dogs Optimizer (PDO) algorithm. For the recognition procedure, the MFS-ELAD method combines a group of three DL techniques: sparse auto-encoder (SAE), stacked long short-term memory (SLSTM), and Elman neural network (ENN) algorithms. Eventually, the parameters of the DL algorithms are fine-tuned using the sand cat swarm optimizer (SCSO) approach, which helps improve the recognition outcomes. The simulation examination of the MFS-ELAD system on the CSE-CIC-IDS2018 dataset exhibits its promising performance over other methods, with a maximal precision of 99.71%.
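A minimal sketch of the normalize-then-vote idea described above is shown below, with three scikit-learn MLPs standing in for the SAE/SLSTM/ENN models and synthetic two-class data standing in for CSE-CIC-IDS2018 features; the meta-heuristic feature selection and fine-tuning stages are omitted.

```python
# Hedged sketch: min-max standardization plus a soft-voting ensemble detector.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (300, 10)), rng.normal(1.5, 1, (300, 10))])
y = np.array([0] * 300 + [1] * 300)                       # 0 = benign, 1 = attack

X = MinMaxScaler().fit_transform(X)                        # min-max standardization step
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[(f"m{i}", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                        random_state=i)) for i in range(3)],
    voting="soft")                                         # average class probabilities
ensemble.fit(Xtr, ytr)
print(ensemble.score(Xte, yte))                            # held-out detection accuracy
```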
The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using a tensor-based task representation, significantly minimizing computing overhead, while BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Results from extensive simulations indicate the efficacy of the suggested strategy: the proposed approach achieves the highest accuracy at 98.1%, surpassing the GA-SVM model at 96.3% and the ART model at 95.4%. The proposed method also achieves a better response time (1.598, compared with 2.31 for Energy-Focused Dynamic Task Scheduling (EFDTS) and 2.04 for the Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL)) and a better makespan (12, compared with 20 for Round Robin (RR) and 14 for the Recurrent Attention-based Summarization Algorithm (RASA)). By explicitly addressing scalability and real-time performance, the hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources.
The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failures or execution delays in cloud computing, so load balancing is needed to maximize cloud resource throughput, decrease user response time, and overcome task execution delays and system failures. Most swarm-intelligent dynamic load balancing solutions that use hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in distributing jobs to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with greater solution diversity, and prevent the search from becoming trapped in a local optimum. Pysim-based experimental verification of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared with baseline load balancing strategies such as the Fractional Improved Whale Social Optimization Based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
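As a rough, simplified stand-in for the hybrid balancer described above, the sketch below runs a plain whale-optimization search over task-to-VM assignments with makespan as the only objective; the spotted-hyena component and the latency/reliability terms are omitted, and the task sizes and VM speeds are illustrative.

```python
# Hedged sketch: whale-optimization-style search for a low-makespan task-to-VM mapping.
import numpy as np

rng = np.random.default_rng(4)
task_size = rng.uniform(1, 10, 30)             # work per task (e.g., MI)
vm_speed = np.array([2.0, 3.0, 5.0])           # VM processing speeds (e.g., MIPS)
n_vms = len(vm_speed)

def makespan(position):
    assign = np.clip(position.astype(int), 0, n_vms - 1)   # continuous position -> VM index
    load = np.zeros(n_vms)
    np.add.at(load, assign, task_size)
    return (load / vm_speed).max()

pop = rng.uniform(0, n_vms, (20, len(task_size)))           # whale positions
best = min(pop, key=makespan).copy()

for it in range(100):
    a = 2 - 2 * it / 100                                     # linearly decreasing coefficient
    for i in range(len(pop)):
        p, l = rng.random(), rng.uniform(-1, 1)
        A, C = 2 * a * rng.random() - a, 2 * rng.random()
        if p < 0.5:
            ref = best if abs(A) < 1 else pop[rng.integers(len(pop))]
            pop[i] = ref - A * np.abs(C * ref - pop[i])      # encircle (exploit) or explore
        else:
            pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best  # spiral
        pop[i] = np.clip(pop[i], 0, n_vms - 1e-9)
        if makespan(pop[i]) < makespan(best):
            best = pop[i].copy()

print(round(makespan(best), 3))                              # best makespan found
```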
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the task scheduling domain.
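The quasi-opposition-based learning strategy named above can be illustrated at population-initialization time: each random candidate is paired with a quasi-opposite point (sampled between the search-space center and the fully opposite point), and the better half of the combined pool is kept. The bounds and the toy sphere objective below are illustrative assumptions, not WCO/AWCO internals.

```python
# Hedged sketch: quasi-opposition-based learning for population initialization.
import numpy as np

rng = np.random.default_rng(5)
lo, hi, dim, n = -5.0, 5.0, 10, 20

def objective(x):                               # toy cost standing in for cost/makespan/load terms
    return float((x ** 2).sum())

pop = rng.uniform(lo, hi, (n, dim))             # random candidates
center = (lo + hi) / 2
opposite = lo + hi - pop                        # opposition-based points
quasi = center + (opposite - center) * rng.random((n, dim))   # between center and opposite

both = np.vstack([pop, quasi])
fitness = np.apply_along_axis(objective, 1, both)
init = both[np.argsort(fitness)[:n]]            # keep the n best of the combined pool

print(round(fitness.min(), 3))
```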
In the field of cloud computing, topics such as computing resource virtualization, differences between grid and cloud computing, the relationship between high-performance computers and cloud computing centers, and cloud security and standards have attracted much research interest. This paper analyzes these topics and highlights that resource virtualization allows information services to be scalable, intensive, and specialized; grid computing involves using many computers for large-scale computing tasks, while cloud computing uses one platform for multiple services; high-performance computers may not be suitable for cloud computing; security in cloud computing focuses on trust management between service suppliers and users; and, building on existing standards, standardization of cloud computing should focus on interoperability between services.
Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility and a new ad-hoc infrastructure based on mobile devices; it provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify the confusion that has arisen from different views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze key components. Moreover, emerging MCC models and services are discussed, and challenging issues that will need to be addressed in future work are identified.
Cloud Computing Assisted Instruction shows incomparable advantages over traditional language teaching, but it also has some major problems: for instance, information technology is treated as omnipotent, information input is excessive, and the teachers' role is considerably weakened. This article attempts to analyze these problems and promote language teaching reform based on Cloud Computing Assisted Instruction.
This paper presents a novel fuzzy firefly-based intelligent algorithm for load balancing in mobile cloud computing while reducing makespan. The proposed technique acts intelligently by using the inherent traits of fuzzy logic and the firefly algorithm: it automatically adjusts its behavior or converges depending on the information gathered during the search process and the objective function. It works for a 3-tier architecture including cloudlets and the public cloud. As cloudlets have limited resources, fuzzy logic is used for cloudlet selection with capacity and waiting time as inputs; fuzzy logic provides human-like decisions without requiring a mathematical model. Firefly is a powerful meta-heuristic optimization technique that balances diversification and solution speed. It balances the load on the cloud and cloudlets while minimizing makespan and execution time. However, it may become trapped in a local optimum; Lévy flight can handle this. Hybridizing the fuzzy firefly with Lévy flight is a novel technique that reduces makespan, execution time, and degree of imbalance while balancing the load. Simulations have been carried out on the Cloud Analyst platform with National Aeronautics and Space Administration (NASA) and Clarknet datasets. Results show that the proposed algorithm outperforms Ant Colony Optimization Queue Decision Maker (ACOQDM), Distributed Scheduling Optimization Algorithm (DSOA), and the Utility-based Firefly Algorithm (UFA) when compared in terms of makespan, degree of imbalance, and figure of merit.
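The Lévy-flight escape mechanism mentioned above can be sketched with Mantegna's method: heavy-tailed steps that occasionally jump far, pulling a firefly out of a local optimum. The exponent beta and the step scale below are illustrative choices, not the paper's parameters.

```python
# Hedged sketch: Levy-flight perturbation (Mantegna's method) for escaping local optima.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng(6)):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)          # heavy-tailed step vector

position = np.full(5, 0.5)                      # a firefly stuck at one point
print(position + 0.01 * levy_step(5))           # mostly small moves, occasionally a long jump
```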
Healthcare is a fundamental part of every individual's life. The healthcare industry is developing very rapidly with the help of advanced technologies. Many researchers are trying to build cloud-based healthcare applications that can be accessed by healthcare professionals from their premises, as well as by patients from their mobile devices through communication interfaces. These systems promote reliable and remote interactions between patients and healthcare professionals. However, there are several limitations to these innovative cloud computing-based systems, namely network availability, latency, battery life, and resource availability. We propose a hybrid mobile cloud computing (HMCC) architecture to address these challenges. Furthermore, we evaluate the performance of heuristic and dynamic machine learning-based task scheduling and load balancing algorithms on our proposed architecture. We compare them to identify the strengths and weaknesses of each algorithm and provide their comparative results, showing latency and energy consumption performance. Challenging issues for cloud-based healthcare systems are discussed in detail.
In most existing CP-ABE schemes, there is only one authority in the system and all the public keys and private keys are issued by this authority, which incurs ciphertext sizes and computation costs in the encryption and decryption operations that depend at least linearly on the number of attributes involved in the access policy. We propose an efficient multi-authority CP-ABE scheme in which the authorities need not interact to generate public information during the system initialization phase. Our scheme has constant ciphertext length and a constant number of pairing computations, and it can be proven CPA-secure in the random oracle model under the decisional q-BDHE assumption. When a user's attributes are revoked, the scheme transfers most of the re-encryption work to the cloud service provider, reducing the data owner's computational cost without compromising security. Finally, the analysis and simulation results show that the proposed schemes ensure the privacy and secure access of sensitive data stored in the cloud server and can cope with dynamic changes of users' access privileges in large-scale systems. Besides, the multi-authority ABE eliminates the key escrow problem, optimizes the ciphertext length, and enhances the efficiency of the encryption and decryption operations.
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, but applying traditional access control models directly to the Cloud cannot resolve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the Cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the security problems of access control are solved by implementing the MTBAC model in the cloud computing environment. Simulation experiments show that the MTBAC model can guarantee the interaction between users and cloud service nodes.
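In the spirit of the mutual-trust idea above, the following is a minimal sketch in which a user's behavior trust and a service node's credibility are each updated from interaction history, and access is granted only when both exceed their thresholds. The update rule, weights, and thresholds are illustrative assumptions, not the MTBAC model's actual formulas.

```python
# Hedged sketch: mutual-trust check before granting access.
def update_trust(old_trust, success, weight=0.2):
    """Exponentially weighted update from the latest interaction outcome."""
    return (1 - weight) * old_trust + weight * (1.0 if success else 0.0)

def grant_access(user_trust, node_credibility, user_min=0.6, node_min=0.7):
    # Access requires BOTH the user and the cloud service node to be trustworthy.
    return user_trust >= user_min and node_credibility >= node_min

user_trust, node_cred = 0.5, 0.8
for outcome in [True, True, False, True, True]:     # user's recent behaviour history
    user_trust = update_trust(user_trust, outcome)

print(round(user_trust, 3), grant_access(user_trust, node_cred))
```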
Advanced cloud computing technology provides cost savings and flexibility of services for users. With the explosion of multimedia data, more and more data owners outsource their personal multimedia data to the cloud, and some computationally expensive tasks are also undertaken by cloud servers. However, the outsourced multimedia data and its applications may reveal the data owner's private information because the data owners lose control of their data. Recently, this concern has aroused new research interest in privacy-preserving reversible data hiding over outsourced multimedia data. In this paper, two reversible data hiding schemes are proposed for encrypted image data in cloud computing: reversible data hiding by homomorphic encryption and reversible data hiding in the encrypted domain. In the former, the additional bits are extracted after decryption; in the latter, they are extracted before decryption. A combined scheme is also designed. This paper thus proposes a privacy-preserving outsourcing scheme for reversible data hiding over encrypted image data in cloud computing, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversible data hiding can be operated over encrypted images at different stages. Theoretical analysis confirms the correctness of the proposed encryption model and justifies the security of the proposed scheme. The computation cost of the proposed scheme is acceptable and adjusts to different security levels.
With the development of Internet technology and human computing, the computing environment has changed dramatically over the last three decades. Cloud computing emerges as a paradigm of Internet computing in which dynamic, scalable, and often virtualized resources are provided as services. With virtualization technology, cloud computing offers diverse services (such as virtual computing, virtual storage, virtual bandwidth, etc.) to the public by means of a multi-tenancy mode. Although users are enjoying the capabilities of super-computing and mass storage supplied by cloud computing, cloud security remains a hot-spot problem, which is in essence the trust management between data owners and storage service providers. In this paper, we propose a data coloring method based on cloud watermarking to recognize and ensure mutual reputations. The experimental results show that the robustness of the reverse cloud generator can guarantee users' embedded social reputation identifications. Hence, our work provides a reference solution to the critical problem of cloud security.