Journal Articles
694 articles found
Secure and Efficient Outsourced Computation in Cloud Computing Environments
1
Authors: Varun Dixit, Davinderjit Kaur. Journal of Software Engineering and Applications, 2024, Issue 9, pp. 750-762 (13 pages)
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
Keywords: secure computation; cloud computing; homomorphic encryption; secure multiparty computation; resource optimization
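The homomorphic encryption approach named in the first abstract can be illustrated with a minimal Paillier-style sketch (not code from the paper): ciphertexts can be multiplied so that decryption yields the sum of the plaintexts, which is what lets a cloud server aggregate encrypted values without seeing them. The tiny primes here are for demonstration only and provide no real security.

```python
import math
import random

def keygen(p=1_000_003, q=1_000_033):
    # Toy primes for illustration; real deployments use ~2048-bit primes.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    # c = g^m * r^n mod n^2, with g = n + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    # L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = keygen()
a, b = 1234, 5678
ca, cb = encrypt(pk, a), encrypt(pk, b)
c_sum = (ca * cb) % (pk[0] ** 2)  # homomorphic addition of plaintexts
assert decrypt(sk, c_sum) == a + b
```

Multiplying ciphertexts adds plaintexts; this additive property is the basis of secure aggregation on outsourced data.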
CBBM-WARM: A Workload-Aware Meta-Heuristic for Resource Management in Cloud Computing (Cited by 1)
2
Authors: K Nivitha, P Pabitha, R Praveen. China Communications, 2025, Issue 6, pp. 255-275 (21 pages)
The rapid advent of artificial intelligence and big data has revolutionized the dynamic requirements on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task due to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling the resources of the cloud with its dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS initially adopts an adaptive density peak clustering algorithm for clustering the workloads of the cloud. It then utilizes fuzzy logic during workload scheduling for determining the availability of cloud resources. It further uses CBBM for potential Virtual Machine (VM) deployment that contributes toward the provision of optimal resources. It is proposed with the capability of achieving optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Keywords: autonomic resource management; cloud computing; coot bird behavior model; SLA violation cost; workload
Intrumer: A Multi-Module Distributed Explainable IDS/IPS for Securing Cloud Environment
3
Authors: Nazreen Banu A, S.K.B. Sangeetha. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together within it. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naïve Bayes algorithm performs the classification of features. The selected features are classified further and forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)
An Algorithm for Cloud-based Web Service Combination Optimization Through Plant Growth Simulation
4
Authors: Li Qiang, Qin Huawei, Qiao Bingqin, Wu Ruifang. Journal of System Simulation (Peking University Core Journal), 2025, Issue 2, pp. 462-473 (12 pages)
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first used mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm was established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Keywords: cloud-based service scheduling algorithm; resource constraint; load optimization; cloud computing; plant growth simulation algorithm
Dynamic Multi-Objective Gannet Optimization (DMGO): An Adaptive Algorithm for Efficient Data Replication in Cloud Systems
5
Authors: P. William, Ved Prakash Mishra, Osamah Ibrahim Khalaf, Arvind Mukundan, Yogeesh N, Riya Karmakar. Computers, Materials & Continua, 2025, Issue 9, pp. 5133-5156 (24 pages)
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Keywords: cloud computing; data replication; dynamic optimization; multi-objective optimization; gannet optimization algorithm; adaptive algorithms; resource efficiency; scalability; latency reduction; energy-efficient computing
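The multi-objective trade-off named in the DMGO abstract (latency vs. storage vs. energy) is often made concrete through weighted-sum scalarization, the simplest way to collapse several objectives into one score. The sketch below is purely illustrative and is not the DMGO algorithm; the weights and per-site figures are invented:

```python
def replication_cost(latency, storage, energy, w=(0.5, 0.3, 0.2)):
    # Weighted-sum scalarization of the three objectives named in the
    # abstract; the weights here are illustrative assumptions.
    return w[0] * latency + w[1] * storage + w[2] * energy

def best_replica_site(sites):
    # Pick the data center whose scalarized cost is lowest.
    # sites maps a site name to a (latency, storage, energy) tuple.
    return min(sites, key=lambda s: replication_cost(*sites[s]))

sites = {"dc1": (10, 5, 2), "dc2": (4, 8, 6)}
chosen = best_replica_site(sites)   # dc2: 0.5*4 + 0.3*8 + 0.2*6 = 5.6
```

A dynamic scheme like the one described would re-run such an evaluation as network conditions and demand change, rather than fixing the placement once.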
Cloud-magnetic resonance imaging system: In the era of 6G and artificial intelligence
6
Authors: Yirong Zhou, Yanhuang Wu, Yuhan Su, Jing Li, Jianyun Cai, Yongfu You, Jianjun Zhou, Di Guo, Xiaobo Qu. Magnetic Resonance Letters, 2025, Issue 1, pp. 52-63 (12 pages)
Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system, called Cloud-MRI, aims at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. The data are then uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. The outcomes are then seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and ultimately improve diagnostic accuracy and work efficiency.
Keywords: magnetic resonance imaging; cloud computing; 6G bandwidth; artificial intelligence; edge computing; federated learning; blockchain
A Dynamic Workload Prediction and Distribution in Cloud Computing Using Deep Reinforcement Learning and LSTM
7
Authors: Nampally Vijay Kumar, Satarupa Mohanty, Prasant Kumar Pattnaik. Journal of Harbin Institute of Technology (New Series), 2025, Issue 4, pp. 64-71 (8 pages)
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction utilising long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritizing speed and energy preservation. The experimental results demonstrate that our load balancing system, which utilises DRL, significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance, and it consistently delivers reliable and durable performance across a range of dynamic workloads.
Keywords: DRL; LSTM; cloud computing; load balancing; Q-learning
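The "predict, then place" loop described in this abstract can be sketched without the heavy machinery. Below, an exponentially weighted moving average stands in for the paper's LSTM forecaster, and a greedy least-loaded rule stands in for the DRL dispatcher; both stand-ins are assumptions for illustration, not the authors' method:

```python
class WorkloadPredictor:
    """Exponentially weighted moving average: a lightweight stand-in
    for an LSTM workload forecaster (illustrative only)."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.level = None

    def update(self, observed):
        # Blend the new observation with the running estimate.
        self.level = observed if self.level is None else (
            self.alpha * observed + (1 - self.alpha) * self.level)

    def predict(self):
        return self.level if self.level is not None else 0.0

def dispatch(task_load, vm_loads):
    """Greedy least-loaded placement: send the predicted load to the VM
    with the lowest current load, then account for the new load."""
    target = min(range(len(vm_loads)), key=lambda i: vm_loads[i])
    vm_loads[target] += task_load
    return target

predictor = WorkloadPredictor(alpha=0.5)
for obs in [10, 20, 30]:
    predictor.update(obs)          # level: 10 -> 15 -> 22.5

vms = [5.0, 2.0, 7.0]              # current VM loads
chosen = dispatch(predictor.predict(), vms)  # VM 1 is least loaded
```

A DRL agent would learn a richer placement policy from reward signals (delay, energy) instead of this fixed greedy rule, but the control loop has the same shape.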
An Efficient and Secure Data Audit Scheme for Cloud-Based EHRs with Recoverable and Batch Auditing
8
Authors: Yuanhang Zhang, Xu An Wang, Weiwei Jiang, Mingyu Zhou, Xiaoxuan Xu, Hao Liu. Computers, Materials & Continua, 2025, Issue 4, pp. 1533-1553 (21 pages)
Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data would then become unregulated. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud remains a pressing issue that urgently needs to be addressed. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We identify that this scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhanced the auditing process using masking techniques and designed new algorithms to strengthen security. We also provide formal proof of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that our scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that our scheme is not only secure and efficient but also supports batch auditing of cloud data. Specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
Keywords: security; cloud computing; cloud storage; recoverable; batch auditing
SDVformer: A Resource Prediction Method for Cloud Computing Systems
9
Authors: Shui Liu, Ke Xiong, Yeshen Li, Zhifei Zhang, Yu Zhang, Pingyi Fan. Computers, Materials & Continua, 2025, Issue 9, pp. 5077-5093 (17 pages)
Accurate prediction of cloud resource utilization is critical. It helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. At present, cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs. These methods are insufficient to achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which is based on the Informer model and integrates Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter is designed to reduce noise and enhance the feature representation of input data. The DVSA mechanism is proposed to optimize the selection of critical features to reduce computational complexity. The T-MOE framework is designed to adjust the model structure based on different resource characteristics, thereby improving prediction accuracy and adaptability. Experimental results show that the proposed SDVformer significantly outperforms baseline models, including Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in terms of prediction precision on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU). Particularly, compared with the Informer model, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
Keywords: cloud computing; time series prediction; DVSA; SG filter; T-MOE
Modified Neural Network Used for Host Utilization Prediction in Cloud Computing Environment
10
Authors: Arif Ullah, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Md Shohel Sayeed. Computers, Materials & Continua, 2025, Issue 3, pp. 5185-5204 (20 pages)
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Hardware resource allocation in cloud computing still suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms. This permits dynamic cloud scalability while maintaining superior service quality. For host prediction, we therefore present a hybrid convolutional neural network and long short-term memory model in this work. First, the input of the suggested hybrid model is subjected to the vector autoregression technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. The remaining data are then processed and sent to the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves long short-term memory, which is suitable for representing the temporal information of irregular trends in time series components. The key to the entire process is that we used the most appropriate activation function for this type of model: a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degree of unpredictability in data centers. Because of this, two actual load traces from typical distributed systems were used in this study's performance assessment. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
Keywords: cloud computing; data center; virtual machine (VM); prediction algorithm
Enhancing Anomaly Detection in Cloud Computing Through Metaheuristics Feature Selection with Ensemble Learning Approach
11
Authors: Jansi Sophia Mary C, Mahalakshmi K. China Communications, 2025, Issue 8, pp. 168-182 (15 pages)
Cloud computing (CC) provides infrastructure, storage services, and applications to users, which should be secured by appropriate procedures or policies. Security in the cloud environment becomes essential to safeguard infrastructure and user information from unauthorized access by implementing timely intrusion detection systems (IDS). Ensemble learning harnesses the collective power of multiple machine learning (ML) methods, and a feature selection (FS) process helps improve the robustness and overall precision of intrusion detection. Therefore, this article presents a meta-heuristic feature selection by ensemble learning-based anomaly detection (MFS-ELAD) algorithm for CC platforms. To realize this objective, the proposed approach utilizes a min-max standardization technique. Then, higher-dimensional features are reduced by the Prairie Dogs Optimizer (PDO) algorithm. For the recognition procedure, the MFS-ELAD method combines a group of three DL techniques: sparse auto-encoder (SAE), stacked long short-term memory (SLSTM), and Elman neural network (ENN) algorithms. Eventually, parameter fine-tuning of the DL algorithms occurs using the sand cat swarm optimizer (SCSO) approach, which helps in improving the recognition outcomes. The simulation examination of the MFS-ELAD system on the CSE-CIC-IDS2018 dataset exhibits its promising performance over other methods, with a maximal precision of 99.71%.
Keywords: anomaly detection; cloud computing; ensemble learning; intrusion detection system; prairie dogs optimization
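Two building blocks of the pipeline described above are easy to show in miniature: the min-max standardization step and hard majority voting across several detectors. This is a simplified stand-in for the SAE/SLSTM/ENN ensemble, not the MFS-ELAD algorithm itself:

```python
from collections import Counter

def minmax_scale(xs):
    # Min-max standardization to [0, 1], as in the preprocessing step.
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def majority_vote(predictions):
    # Hard-voting ensemble: each detector casts a label; majority wins.
    return Counter(predictions).most_common(1)[0][0]

scaled = minmax_scale([2, 4, 6])                       # [0.0, 0.5, 1.0]
label = majority_vote(["attack", "normal", "attack"])  # "attack"
```

Real ensembles often weight votes by each model's validation accuracy, or average class probabilities (soft voting) instead of counting labels.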
Revolutionizing Learning: The Role of AI, IoT, and Cloud Computing in Smart Education
12
Author: Chiweng Leng. Journal of Contemporary Educational Research, 2025, Issue 6, pp. 12-17 (6 pages)
The rapid advancement of technology has paved the way for innovative approaches to education. Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing are three transformative technologies reshaping how education is delivered, accessed, and experienced. These technologies enable personalized learning, optimize teaching processes, and make educational resources more accessible to learners worldwide. This paper examines the integration of these technologies into smart education systems, highlighting their applications, benefits, and challenges, and exploring their potential to bridge gaps in educational equity and inclusivity.
Keywords: artificial intelligence; Internet of Things; cloud computing; smart education; personalized learning
Energy-Efficient Resource Allocation in Cloud Computing Using QT-DNN and Binary Bird Swarm Optimization
13
Authors: Puneet Sharma, Dhirendra Prasad Yadav, Bhisham Sharma, Surbhi B. Khan, Ahlam Almusharraf. Computers, Materials & Continua, 2025, Issue 10, pp. 2179-2193 (15 pages)
The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using tensor-based task representation, significantly minimizing computing overhead. The BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Experimental results from extensive simulations indicate the efficacy of the suggested strategy; the proposed approach demonstrates the highest level of accuracy, reaching 98.1%. This surpasses the GA-SVM model, which achieves an accuracy of 96.3%, and the ART model, which achieves an accuracy of 95.4%. The proposed method achieves a better response time of 1.598, compared with 2.31 and 2.04 for the existing Energy-Focused Dynamic Task Scheduling (EFDTS) and Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL) methods; moreover, it achieves a better makespan of 12, compared with 20 and 14 for Round Robin (RR) and the Recurrent Attention-based Summarization Algorithm (RASA). The hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources by explicitly addressing scalability and real-time performance.
Keywords: cloud computing; quality of service; virtual machine; allocation; deep neural network
Hybrid Spotted Hyena and Whale Optimization Algorithm-Based Dynamic Load Balancing Technique for Cloud Computing Environment
14
Authors: N Jagadish Kumar, R Praveen, D Selvaraj, D Dhinakaran. China Communications, 2025, Issue 8, pp. 206-227 (22 pages)
The uncertain nature of mapping user tasks to Virtual Machines (VMs) causes system failures or execution delays in cloud computing. To maximize cloud resource throughput and decrease user response time, load balancing is needed to overcome user task execution delay and system failure. Most swarm-intelligent dynamic load balancing solutions that use hybrid metaheuristic algorithms fail to balance exploitation and exploration, and most load balancing methods are insufficient to handle the growing uncertainty in job distribution to VMs. Thus, the Hybrid Spotted Hyena and Whale Optimization Algorithm-based Dynamic Load Balancing Mechanism (HSHWOA) partitions traffic among numerous VMs or servers to guarantee that user tasks are completed quickly. This load balancing approach improves performance by considering average network latency, dependability, and throughput. The hybridization of SHOA and WOA aims to improve the trade-off between exploration and exploitation, assign jobs to VMs with more solution diversity, and prevent the solution from reaching local optimality. Pysim-based experimental verification and testing of the proposed HSHWOA showed a 12.38% improvement in minimized makespan, a 16.21% increase in mean throughput, and a 14.84% increase in network stability compared to baseline load balancing strategies such as the Fractional Improved Whale Social Optimization Based VM Migration Strategy (FIWSOA), HDWOA, and Binary Bird Swap.
Keywords: cloud computing; load balancing; Spotted Hyena Optimization Algorithm (SHOA); throughput; Virtual Machines (VMs); Whale Optimization Algorithm (WOA)
Energy Efficient VM Selection Using CSOA-VM Model in Cloud Data Centers
15
Authors: Mandeep Singh Devgan, Tajinder Kumar, Purushottam Sharma, Xiaochun Cheng, Shashi Bhushan, Vishal Garg. CAAI Transactions on Intelligence Technology, 2025, Issue 4, pp. 1217-1234 (18 pages)
Cloud data centres have evolved with an issue of energy management due to the constant increase in size, complexity, and enormous consumption of energy. Energy management is a challenging issue that is critical in cloud data centres and an important research concern for many researchers. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection and a novel placement algorithm considering different constraints. The energy consumption model and the simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The proposed model also saves energy; the performance analysis shows that the energy consumption obtained is 1.35 kWh, SLA violation is 9.2, and VM migration is about 268. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% improvement (reduction) in violations of SLA in comparison to existing techniques.
Keywords: cloud computing; cloud data center; energy consumption; VM selection
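Energy accounting of the kind this abstract reports typically rests on a linear host power model, and VM selection decides which VM to migrate off an overloaded host. The sketch below shows both ideas under assumed numbers; the minimum-migration rule is a common heuristic from the consolidation literature, not necessarily the paper's CS-based selector:

```python
def host_power(util, p_idle=100.0, p_max=200.0):
    # Linear host power model widely used in cloud simulators:
    # power grows linearly with CPU utilization (watts are assumed).
    return p_idle + (p_max - p_idle) * util

def select_vm_to_migrate(vms, host_capacity, target_util=0.8):
    """Minimum-migration heuristic sketch: from an overloaded host, pick
    the smallest VM whose removal brings utilization under the target."""
    util = sum(vms.values()) / host_capacity
    excess = (util - target_util) * host_capacity
    candidates = [name for name, load in vms.items() if load >= excess]
    if not candidates:
        # No single VM is large enough; move the biggest one.
        return max(vms, key=vms.get)
    return min(candidates, key=lambda name: vms[name])

power = host_power(0.5)                                  # 150.0
victim = select_vm_to_migrate({"a": 10, "b": 30, "c": 20}, 60)
```

Moving the smallest sufficient VM minimizes migration traffic while still letting the host drop below the utilization threshold.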
Innovative Approaches to Task Scheduling in Cloud Computing Environments Using an Advanced Willow Catkin Optimization Algorithm
16
Authors: Jeng-Shyang Pan, Na Yu, Shu-Chuan Chu, An-Ning Zhang, Bin Yan, Junzo Watada. Computers, Materials & Continua, 2025, Issue 2, pp. 2495-2520 (26 pages)
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm’s performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the domain of task scheduling within a cloud computing environment.
Keywords: willow catkin optimization algorithm; cloud computing; task scheduling; opposition-based learning strategy
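Metaheuristic schedulers like the one above are usually benchmarked on makespan, and a useful mental baseline is classical longest-processing-time-first (LPT) list scheduling. The sketch below is that textbook heuristic, shown for comparison; it is not the AWCO algorithm:

```python
import heapq

def lpt_schedule(tasks, n_vms):
    """Longest-processing-time-first list scheduling: always hand the
    next-longest task to the currently least-loaded VM."""
    loads = [(0.0, vm) for vm in range(n_vms)]  # (load, vm_id) min-heap
    heapq.heapify(loads)
    assignment = {}
    for length, task in sorted(((t, i) for i, t in enumerate(tasks)),
                               reverse=True):
        load, vm = heapq.heappop(loads)         # least-loaded VM
        assignment[task] = vm
        heapq.heappush(loads, (load + length, vm))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Five tasks with the given lengths, scheduled onto two VMs.
assignment, makespan = lpt_schedule([7, 5, 4, 3, 2], 2)  # makespan 11
```

LPT is greedy and fast with a well-known approximation guarantee; population-based metaheuristics spend more computation searching for assignments that beat such greedy baselines on makespan, cost, and load balance simultaneously.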
Consensus-Based Cryptographic Framework for Side-Channel Attack Resilience in Cloud Environments
17
Authors: I. Nasurulla, K. Hemalatha, P. Ramachandran, S. Parvathi. Journal of Harbin Institute of Technology (New Series), 2025, Issue 2, pp. 90-104 (15 pages)
Cloud environments are essential for modern computing, but they are increasingly vulnerable to Side-Channel Attacks (SCAs), which exploit indirect information to compromise sensitive data. To address this critical challenge, we propose the SecureCons Framework (SCF), a novel consensus-based cryptographic framework designed to enhance resilience against SCAs in cloud environments. SCF integrates a dual-layer approach combining lightweight cryptographic algorithms with a blockchain-inspired consensus mechanism to secure data exchanges and thwart potential side-channel exploits. The framework includes adaptive anomaly detection models, cryptographic obfuscation techniques, and real-time monitoring to identify and mitigate vulnerabilities proactively. Experimental evaluations demonstrate the framework's robustness, achieving over 95% resilience against advanced SCAs with minimal computational overhead. SCF provides a scalable, secure, and efficient solution, setting a new benchmark for side-channel attack mitigation in cloud ecosystems.
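The abstract does not spell out SCF's algorithms, so as a minimal sketch of the kind of timing-side-channel countermeasure such a framework builds on, the snippet below verifies a message tag in constant time. HMAC-SHA-256 stands in for the HAVAL hash named in the paper's keywords (HAVAL is not in Python's standard library); the function names are illustrative.

```python
import hmac
import hashlib

def tag(key: bytes, message: bytes) -> bytes:
    # Authenticate a message with an HMAC; SHA-256 is used here as a
    # stand-in primitive, not the framework's actual hash.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, received_tag: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # first differ, closing the timing side channel that a naive ==
    # byte-string comparison would open.
    return hmac.compare_digest(tag(key, message), received_tag)
```

The design point is that the comparison, not the hash, is the usual timing leak: an early-exit `==` lets an attacker recover a valid tag byte by byte from response latencies.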
Keywords: cloud computing; side-channel attacks; HAVAL cryptographic hash; Wilcoxon signed-rank test; consensus mechanism; improved Schmidt-Samoa cryptography
Computation Partitioning in Mobile Cloud Computing: A Survey (Cited by 1)
18
Authors: Lei Yang, Jiannong Cao. ZTE Communications, 2013, Issue 4, pp. 8-17, 10 pages
Mobile devices are increasingly interacting with clouds, and mobile cloud computing has emerged as a new paradigm. A central topic in mobile cloud computing is computation partitioning, which involves partitioning the execution of applications between the mobile side and the cloud side so that execution cost is minimized. This paper discusses computation partitioning in mobile cloud computing. We first present the background and system models of mobile cloud computation partitioning systems. We then describe and compare state-of-the-art mobile computation partitioning approaches in terms of application modeling, profiling, optimization, and implementation. We point out the main research issues and directions and summarize our own work.
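The core partitioning decision the survey covers can be reduced to a textbook cost comparison: offload a task only when remote execution plus data transfer beats local execution. This sketch uses that classic rule with assumed parameter names; real partitioners in the surveyed systems also weigh energy, call-graph structure, and network variability.

```python
def should_offload(cycles, local_speed, cloud_speed, data_bytes, bandwidth):
    # cycles: CPU cycles the task needs
    # local_speed, cloud_speed: processor speeds in cycles/second
    # data_bytes, bandwidth: transfer size (bytes) and link rate (bytes/s)
    local_time = cycles / local_speed
    remote_time = cycles / cloud_speed + data_bytes / bandwidth
    # Offload only when cloud execution plus transfer is strictly faster.
    return remote_time < local_time
```

Example: a compute-heavy task (10^9 cycles) on a slow handset clearly favors the cloud, while a trivial task attached to a large payload stays local because the transfer term dominates.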
Keywords: mobile cloud computing; offloading; computation partitioning
Representing an Increasing Virtual Machine Security Strategy in Cloud Computing Computations (Cited by 1)
19
Author: Mohammad Shirzadi. Electrical Science & Engineering, 2021, Issue 2, pp. 7-16, 10 pages
This paper proposes an algorithm for increasing virtual machine security in cloud computing. Imbalance between load and energy has been one of the drawbacks of older server-hosting methods: if two virtual servers are active on a host and the energy load on that host grows too high, the host allocates the energy of other (virtual) hosts to itself to stay steady, which usually leads to hardware overflow errors and user dissatisfaction. Cloud-based methods have reduced this problem, but not eliminated it. The proposed algorithm therefore not only establishes a suitable security baseline but also divides energy consumption and load evenly among virtual servers. It is compared with several previously proposed security strategies, including SC-PSSF, PSSF, and DEEAC. Comparisons show that the proposed method offers high-performance computing, greater efficiency, and lower energy consumption in the network.
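The load-balancing idea the abstract describes, spreading demand so no host siphons capacity from its neighbours, can be sketched as a greedy least-loaded placement rule. This is a generic illustration under assumed data shapes, not the paper's algorithm: hosts are a dict of name to (used, capacity) pairs, and energy is folded into a single load number.

```python
def place_vm(hosts, demand):
    # Greedy least-loaded placement: among hosts that can absorb the
    # demand without exceeding capacity, pick the one whose utilisation
    # after placement is lowest; return None when nothing fits.
    best, best_util = None, None
    for name, (used, cap) in hosts.items():
        if used + demand <= cap:
            util = (used + demand) / cap
            if best_util is None or util < best_util:
                best, best_util = name, util
    return best
```

Because a new VM always lands on the least-utilised feasible host, no single host is pushed toward the overflow condition the abstract warns about.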
Keywords: cloud computing; high-performance computing; automation; security; server
MCWOA Scheduler: Modified Chimp-Whale Optimization Algorithm for Task Scheduling in Cloud Computing (Cited by 1)
20
Authors: Chirag Chandrashekar, Pradeep Krishnadoss, Vijayakumar Kedalu Poornachary, Balasundaram Ananthakrishnan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2593-2616, 24 pages
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed, and has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms, whereas efficient task scheduling can lower the cloud infrastructure's energy consumption, maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original algorithm, especially in multi-damage detection scenarios, excelling in avoiding false positives and enhancing computational speed. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay; the simulated data indicate that MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
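The low-discrepancy initialization the abstract attributes to the Sobol sequence can be illustrated with the related radical-inverse construction, which needs no external library. This is a hedged stand-in: the radical-inverse (Van der Corput/Halton) points shown here share the even-coverage property of Sobol points but are not the Sobol sequence itself, and the prime-base pairing per dimension is an assumption of this sketch.

```python
def van_der_corput(n, base=2):
    # Radical-inverse (Van der Corput) point in [0, 1): reflect the
    # base-b digits of n about the radix point. The 1-D building block
    # of low-discrepancy families such as Halton and Sobol.
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def low_discrepancy_population(size, dim, lo, hi):
    # Spread agents evenly over the box [lo, hi]^dim by pairing each
    # dimension with a different prime base (Halton-style stand-in for
    # the Sobol initializer described in the abstract; dim <= 6 here).
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return [[lo + (hi - lo) * van_der_corput(i + 1, b) for b in primes]
            for i in range(size)]
```

Unlike uniform random sampling, consecutive points deliberately avoid each other, so even a small population covers the scheduling search space without the clusters and gaps that slow early convergence.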
Keywords: cloud computing; scheduling; chimp optimization algorithm; whale optimization algorithm