In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO), combined with swarm operators of the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the optimization problem that combines cost and energy for multi-objective task scheduling. A knapsack (backpack) formulation is also added to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
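The knapsack (backpack) formulation mentioned above can be sketched as the standard 0/1 dynamic program. This is a generic illustration, not the paper's actual iFogSim integration: the task values and integer energy weights below are hypothetical.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: pick items maximizing total value under a weight cap.
    dp[w] holds the best value achievable with total weight <= w."""
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# hypothetical example: task "values" (e.g., cost savings) and
# integer energy weights, under an energy budget of 5 units
best = knapsack([60, 100, 120], [1, 2, 3], 5)  # -> 220
```

In a scheduler, the "weight" of a task would be its energy demand and the "value" whatever cost metric the objective rewards; the DP then picks the subset fitting the energy budget.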
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
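The Simulated Annealing half of hybrids like MDMOSA typically hinges on the Metropolis acceptance rule: always accept an improving schedule, and accept a worse one with a probability that shrinks as the temperature cools. A minimal sketch, not the paper's exact formulation:

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis criterion for a minimization objective (e.g., makespan).
    delta = candidate_cost - current_cost."""
    if delta <= 0:          # candidate schedule is no worse: always accept
        return True
    if temperature <= 0:    # frozen: reject all worsening moves
        return False
    return rng() < math.exp(-delta / temperature)
```

A cooling schedule (e.g., multiplying the temperature by 0.95 each iteration) gradually shifts the search from exploration to exploitation.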
The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 vs. 36.2 MJ for the commercial system, a 31% improvement), achieved through the novel integration of Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface. These modifications collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
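The reported 31% gain follows directly from the two stored-energy figures in the abstract; a quick arithmetic check:

```python
def pct_improvement(new, baseline):
    """Relative improvement of `new` over `baseline`, in percent."""
    return 100.0 * (new - baseline) / baseline

gain = pct_improvement(47.4, 36.2)  # prototype vs. commercial system, MJ
# round(gain) -> 31
```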
Our primary research hypothesis rests on a simple idea: the evolution of top-rated publications on a particular theme depends heavily on the progress and maturity of related topics. This holds even when there are no clear relations, or when some concepts appear to cease to exist and give way to newer ones that emerged many years ago. We implemented our model based on the Computer Science Ontology (CSO) and analyzed 44 years of publications. We then derived the most important concepts related to Cloud Computing (CC) from the scientific collection offered by Clarivate Analytics. Our methodology includes data extraction using advanced web crawling techniques, data preparation, statistical data analysis, and graphical representations. We obtained related concepts after aggregating the scores using the Jaccard coefficient and the CSO ontology. Our article reveals the contribution of Cloud Computing topics in research papers in leading scientific journals and the relationships between the field of Cloud Computing and the interdependent subdivisions identified in the broader framework of Computer Science.
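The Jaccard coefficient used above to aggregate concept scores is simply the ratio of intersection to union of two sets; a minimal implementation over concept sets (the example topics are illustrative, not drawn from the study):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

sim = jaccard({"cloud computing", "virtualization"},
              {"cloud computing", "grid computing", "virtualization"})
# sim == 2/3: two shared concepts out of three distinct ones
```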
In recent years, the use of mobile devices such as smartphones and tablet PCs has been rapidly increasing. For these mobile devices, storage space is limited due to their characteristics. To make up for this limited storage space, several methods are being researched. Among these, the cloud storage service (CSS), one of the cloud computing services, is an efficient way to compensate for limited storage space. CSS is a service for storing files in cloud storage and accessing the stored files through networks (the Internet) at any time and anywhere. With existing CSS, users store their personally important files in the cloud storage rather than on their own computers. This may cause security problems such as the leaking of information from private files or damage to that information. Thus, we propose a cloud storage system that can solve the security problem of CSS for mobile devices by using the personal computer. Our system is designed to store and manage files through direct communication between mobile devices and personal computer storage, using Software as a Service (SaaS), one of the cloud computing service models, instead of storing files directly in cloud storage.
Cloud computing is becoming the developing trend in the information field, and it is causing many transformations in related fields. In order to adapt to such changes, computer forensics is bound to improve and integrate into the new environment. Standing on this point, this paper suggests a computer forensic service framework based on the security architecture of cloud computing and the requirements of the cloud computing environment. The framework introduces the honey farm technique and pays more attention to active forensics, which can improve case-handling efficiency and reduce cost.
The rapid advent of artificial intelligence and big data has revolutionized the dynamic requirements placed on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is identified as a herculean task due to the huge distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting the Service Level Agreements (SLAs). However, the existing autonomic cloud resource management frameworks are not capable of handling cloud resources with dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through the estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS initially adopts an adaptive density peak clustering algorithm for clustering the workloads of the cloud. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses CBBM for potential Virtual Machine (VM) deployment, which contributes to the provision of optimal resources. It is proposed with the capability of achieving optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. The experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as intrusion detection systems (IDS), have been developed to tackle these issues. However, most conventional IDS models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together within it. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naive Bayes algorithm performs the classification of features. The selected features are classified further and are forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
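A common way to trade off opposing replication goals (latency, storage cost, energy) is weighted scalarization of normalized objectives. The sketch below is a generic illustration, not DMGO's actual scoring rule; the site names and weights are hypothetical.

```python
def weighted_score(metrics, weights):
    """Combine normalized objectives (lower is better) into one score."""
    return sum(weights[k] * metrics[k] for k in weights)

def best_site(candidates, weights):
    """Pick the replica site with the lowest combined score."""
    return min(candidates, key=lambda s: weighted_score(candidates[s], weights))

# hypothetical data centers with objectives pre-normalized to [0, 1]
weights = {"latency": 0.5, "storage": 0.3, "energy": 0.2}
candidates = {
    "dc-east": {"latency": 0.2, "storage": 0.6, "energy": 0.5},
    "dc-west": {"latency": 0.8, "storage": 0.1, "energy": 0.2},
}
choice = best_site(candidates, weights)  # -> "dc-east"
```

A dynamic optimizer would re-run this ranking as the metrics change, which is where the adaptivity the abstract describes comes in.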
Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system, and its four generations, that integrates emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system, called Cloud-MRI, aims at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. The data are then uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. Finally, the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and ultimately improve diagnostic accuracy and work efficiency.
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction using long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritizing speed and energy preservation. The experimental results demonstrate that our DRL-based load balancing system significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance, and it consistently delivers reliable and durable performance across a range of dynamic workloads.
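The deep Q-learning component of such a DRL balancer rests on the standard temporal-difference update. A tabular sketch makes the rule concrete; the real system uses neural networks, and the states and actions below are hypothetical:

```python
def q_update(q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step on a dict-backed table q[(state, action)].
    alpha is the learning rate, gamma the discount factor."""
    best_next = max(q.get((s_next, b), 0.0) for b in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q[(s, a)]

# hypothetical: reward 1.0 for routing a request to an underloaded VM
q = {}
q_update(q, "high_load", "route_vm2", 1.0, "balanced",
         ["route_vm1", "route_vm2"])
```

In the load-balancing setting, the state would encode current and LSTM-predicted load, and the reward would penalize response time and energy use.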
Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data then become unregulated. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised in order to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud remains a pressing issue that urgently needs to be addressed. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We identify that this scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhance the auditing process using masking techniques and design new algorithms to strengthen security. We also provide formal proofs of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that our scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that our scheme is not only secure and efficient but also supports batch auditing of cloud data. Specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
Accurate prediction of cloud resource utilization is critical: it helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. At present, cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs. These methods are insufficient to achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which is based on the Informer model and integrates Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter is designed to reduce noise and enhance the feature representation of the input data. The DVSA mechanism is proposed to optimize the selection of critical features and reduce computational complexity. The T-MOE framework is designed to adjust the model structure based on different resource characteristics, thereby improving prediction accuracy and adaptability. Experimental results show that our proposed SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in terms of prediction precision on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU). In particular, compared with the Informer model, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
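The Savitzky-Golay filter at the front of SDVformer smooths a series by fitting a local polynomial in a sliding window. The classic 5-point quadratic kernel makes the idea concrete; this is a textbook sketch, not the paper's configuration:

```python
def savitzky_golay_5_2(x):
    """Smooth a series with the classic 5-point, degree-2 Savitzky-Golay
    kernel. Coefficients (-3, 12, 17, 12, -3)/35 reproduce any quadratic
    exactly, so smooth trends pass through while noise is averaged out."""
    c = (-3, 12, 17, 12, -3)
    out = list(x)  # edges are left untouched in this minimal version
    for i in range(2, len(x) - 2):
        out[i] = sum(ci * x[i + j - 2] for j, ci in enumerate(c)) / 35.0
    return out

# a noiseless quadratic is preserved exactly by the kernel
smoothed = savitzky_golay_5_2([t * t for t in range(7)])
```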
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Without it, hardware resource allocation in cloud computing still results in host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, which permit dynamic cloud scalability while maintaining superior service quality. For host load prediction, we therefore present a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model in this work. First, the input to the suggested hybrid model is subjected to the vector autoregression (VAR) technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. After that, the remaining data are processed and sent into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves the long short-term memory layer, which is suitable for representing the temporal information of irregular trends in time series components. The key to the entire process is that we used the most appropriate activation function for this type of model: a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degrees of unpredictability in data centers. Because of this, two actual load traces were used to assess performance in this study, one of them from a typical distributed system. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
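The VAR preprocessing step, removing the linearly predictable part of a multivariate load series, can be sketched with a least-squares VAR(1) fit. This is a generic illustration, not the paper's exact pipeline:

```python
import numpy as np

def var1_residuals(x):
    """Fit x_t = A x_{t-1} + c by least squares; return the residuals,
    i.e., the part of the series not explained by linear interdependence."""
    x = np.asarray(x, dtype=float)                        # shape (T, k)
    past = np.hstack([x[:-1], np.ones((len(x) - 1, 1))])  # lagged values + intercept
    coef, *_ = np.linalg.lstsq(past, x[1:], rcond=None)
    return x[1:] - past @ coef

# a series that is exactly VAR(1): first column doubles, second adds 1,
# so the residuals vanish (up to floating point)
demo = var1_residuals([[1, 0], [2, 1], [4, 2], [8, 3], [16, 4]])
```

Higher lag orders work the same way, with more lagged columns stacked into the regressor matrix.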
The rapid advancement of technology has paved the way for innovative approaches to education. Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing are three transformative technologies reshaping how education is delivered, accessed, and experienced. These technologies enable personalized learning, optimize teaching processes, and make educational resources more accessible to learners worldwide. This paper examines the integration of these technologies into smart education systems, highlighting their applications, benefits, and challenges, and exploring their potential to bridge gaps in educational equity and inclusivity.
Cloud computing (CC) provides infrastructure, storage services, and applications to users, which should be secured by appropriate procedures or policies. Security in the cloud environment is essential to safeguard infrastructure and user information from unauthorized access by implementing timely intrusion detection systems (IDS). Ensemble learning harnesses the collective power of multiple machine learning (ML) methods, aided by a feature selection (FS) process, to improve the robustness and overall precision of intrusion detection. Therefore, this article presents a meta-heuristic feature selection by ensemble learning-based anomaly detection (MFS-ELAD) algorithm for CC platforms. To realize this objective, the proposed approach utilizes a min-max standardization technique. Then, higher-dimensional features are reduced by the Prairie Dogs Optimizer (PDO) algorithm. For the recognition procedure, the MFS-ELAD method ensembles a group of three DL techniques: sparse auto-encoder (SAE), stacked long short-term memory (SLSTM), and Elman neural network (ENN) algorithms. Eventually, parameter fine-tuning of the DL algorithms is performed using the sand cat swarm optimizer (SCSO) approach, which helps improve the recognition outcomes. The simulation examination of the MFS-ELAD system on the CSE-CIC-IDS2018 dataset exhibits its promising performance over other methods, with a maximal precision of 99.71%.
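The min-max standardization step mentioned above rescales each feature to a fixed range before feature selection; a minimal per-feature sketch:

```python
def min_max(values, lo=0.0, hi=1.0):
    """Rescale a feature column to [lo, hi]; constant columns map to lo."""
    mn, mx = min(values), max(values)
    if mx == mn:
        return [lo] * len(values)
    return [lo + (hi - lo) * (v - mn) / (mx - mn) for v in values]

scaled = min_max([2, 4, 6])  # -> [0.0, 0.5, 1.0]
```

In practice the minima and maxima are computed on the training split only and reused for test data, so the two splits share the same scale.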
The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using tensor-based task representation, significantly minimizing computing overhead. The BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Experimental results from extensive simulations indicate the efficacy of the suggested strategy: the proposed approach demonstrates the highest level of accuracy, reaching 98.1%. This surpasses the GA-SVM model, which achieves an accuracy of 96.3%, and the ART model, which achieves an accuracy of 95.4%. The proposed method also performs better in terms of response time, with 1.598 compared to the existing methods Energy-Focused Dynamic Task Scheduling (EFDTS) and Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL), with 2.31 and 2.04, respectively. Moreover, it performs better in terms of makespan, with 12 compared to Round Robin (RR) and the Recurrent Attention-based Summarization Algorithm (RASA), with 20 and 14, respectively. The hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources by explicitly addressing scalability and real-time performance.
The complexity of cloud environments challenges secure resource management, especially for intrusion detection systems (IDS). Existing strategies struggle to balance efficiency, cost fairness, and threat resilience. This paper proposes an innovative approach to managing cloud resources through the integration of a genetic algorithm (GA) with a "double auction" method. This approach seeks to enhance security and efficiency by aligning buyers and sellers within an intelligent market framework. It guarantees equitable pricing while utilizing resources efficiently and optimizing advantages for all stakeholders. The GA functions as an intelligent search mechanism that identifies optimal combinations of bids from users and suppliers, addressing issues arising from the intricacies of cloud systems. Analyses showed that our method surpasses previous strategies, particularly in terms of price accuracy, speed, and the capacity to manage large-scale activities, critical factors for real-time cybersecurity systems such as IDS. Our research integrates artificial intelligence-inspired evolutionary algorithms with market-driven methods to develop intelligent resource management systems that are secure, scalable, and adaptable to evolving risks, such as process innovation.
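A textbook double-auction clearing rule illustrates the market half of the approach: sort bids descending and asks ascending, match while the k-th bid still meets the k-th ask, and price at the midpoint of the last matched pair. This is a generic sketch, not the paper's GA-driven mechanism; the bid and ask values are hypothetical.

```python
def double_auction(bids, asks):
    """Clear a one-shot double auction over unit resource lots.
    Returns (number of trades, clearing price or None)."""
    bids = sorted(bids, reverse=True)  # buyers, highest willingness first
    asks = sorted(asks)                # sellers, cheapest first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None  # no bid meets any ask: market does not clear
    price = (bids[k - 1] + asks[k - 1]) / 2.0
    return k, price

trades, price = double_auction([10, 8, 6, 4], [3, 5, 7, 9])  # -> 2 trades at 6.5
```

In the hybrid framework described above, the GA would search over which bid/ask combinations to admit rather than clearing a single sorted list.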
Cloud data centres have evolved with an energy management issue due to the constant increase in size, complexity, and enormous consumption of energy. Energy management is a challenging issue that is critical in cloud data centres and an important concern for many researchers. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection and a novel placement algorithm considering the different constraints. The energy consumption model and the simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The proposed model also saves energy, and the performance analysis shows that the energy consumption obtained is 1.35 kWh, the SLA violation is 9.2, and the number of VM migrations is about 268. Thus, there is an improvement of about 1.8% in energy consumption and a 2.1% improvement (reduction) in SLA violations in comparison to existing techniques.
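Cuckoo search explores candidate solutions (here, VM selections) with Lévy flights: heavy-tailed random jumps whose step length is commonly drawn via Mantegna's algorithm. A minimal sketch; the parameterization is illustrative, not the paper's.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step length (Mantegna's algorithm). Most steps are
    small, but occasional large jumps help escape local optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

In a placement loop, each "cuckoo" perturbs its current VM assignment by a Lévy step, and the worst nests are periodically abandoned and re-seeded.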
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to reduce energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with swarm operators from the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. The knapsack (backpack) problem is also incorporated to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
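The knapsack formulation mentioned in the abstract can be made concrete with a small sketch. The function below is an illustrative 0/1 knapsack solver, not the paper's EAEOSSA implementation; the framing of "values" as energy savings and "weights" as cost under a budget is an assumption for illustration only.

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack via dynamic programming.

    Illustratively, values[i] could be the energy saving of scheduling
    task i on a fog node, weights[i] its cost, and capacity the budget.
    """
    # dp[w] = best total value achievable within weight budget w
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate budgets downward so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]
```

For example, with item values (60, 100, 120), weights (10, 20, 30), and a budget of 50, the optimum picks the last two items for a total value of 220.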
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
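The Simulated Annealing component described above rests on the Metropolis acceptance rule: an improving move is always accepted, while a worsening move is accepted with probability exp(-delta/T), which shrinks as the temperature cools. A minimal generic sketch of that rule (not the MDMOSA code; the `delta` convention assumes minimization):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis acceptance criterion for simulated annealing.

    delta: change in cost (new - old); negative means an improvement.
    temperature: current annealing temperature (> 0).
    rng: uniform [0, 1) sampler, injectable for deterministic testing.
    """
    if delta <= 0:
        return True  # improvements are always accepted
    # Worse solutions are accepted with probability exp(-delta / T),
    # allowing escape from local optima early in the schedule.
    return rng() < math.exp(-delta / temperature)
```

At high temperature almost any move passes; as the temperature decays (e.g. geometrically, T *= 0.95 per iteration), the search settles into pure exploitation.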
Funding: Funded by the National Council of Science, Technology, and Technological Innovation (CONCYTEC) and the National Program of Scientific Research and Advanced Studies (PROCIENCIA) under the E041-2022 "Applied Research Projects" competition. Contract number: PE501078609-2022-PROCIENCIA.
Abstract: The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 vs. 36.2 MJ for the commercial system, representing a 31% improvement), achieved through the novel integration of Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface. These modifications collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
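The reported 31% gain follows directly from the two storage figures in the abstract; a quick arithmetic check:

```python
def relative_improvement(new, baseline):
    """Percent improvement of `new` over `baseline`."""
    return (new - baseline) / baseline * 100

# Thermal energy storage figures from the abstract (MJ):
# prototype 47.4 vs. commercial system 36.2.
gain = relative_improvement(47.4, 36.2)  # ~30.9%, reported as 31%
```

The exact ratio is (47.4 - 36.2) / 36.2 = 30.9%, which rounds to the 31% stated in the abstract.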
Funding: Pawel Lula's participation in the research has been carried out as part of a research initiative financed by the Ministry of Science and Higher Education within the "Regional Initiative of Excellence" Programme for 2019-2022, project no. 021/RID/2018/19, total financing 11,897,131.40 PLN. The other authors received no specific funding for this study.
Abstract: Our primary research hypothesis stands on a simple idea: the evolution of top-rated publications on a particular theme depends heavily on the progress and maturity of related topics. This holds even when no clear relations are visible, or when some concepts appear to cease to exist and give way to newer ones that emerged many years earlier. We implemented our model based on the Computer Science Ontology (CSO) and analyzed 44 years of publications. We then derived the most important concepts related to Cloud Computing (CC) from the scientific collection offered by Clarivate Analytics. Our methodology includes data extraction using advanced web crawling techniques, data preparation, statistical data analysis, and graphical representations. We obtained related concepts after aggregating the scores using the Jaccard coefficient and the CSO Ontology. Our article reveals the contribution of Cloud Computing topics in research papers in leading scientific journals and the relationships between the field of Cloud Computing and the interdependent subdivisions identified in the broader framework of Computer Science.
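The Jaccard coefficient used above to aggregate concept scores is simply the size of the intersection of two sets divided by the size of their union. A minimal sketch, with hypothetical concept sets standing in for the paper's CSO-derived topics:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient between two iterables of items."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # conventionally, two empty sets are identical
    return len(a & b) / len(a | b)

# Hypothetical concept sets attached to two publications:
sim = jaccard({"cloud computing", "virtualization"},
              {"cloud computing", "edge computing"})
```

Here one concept out of three distinct ones is shared, so the similarity is 1/3; scores like this can then be aggregated across publication pairs.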
Funding: The MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2012-H0301-12-2006).
Abstract: In recent years, the use of mobile devices such as smart phones, tablet PCs, etc. has been rapidly increasing. For these mobile devices, storage space is limited due to their characteristics. To make up for the limited storage space of mobile devices, several methods are being researched. Of these, the cloud storage service (CSS), one of the cloud computing services, is an efficient solution to compensate for such limited storage space. CSS is a service for storing files in cloud storage and then accessing the stored files through networks (the Internet) at anytime, anywhere. With the existing CSS, users store their personally important files in the cloud storage, not on their own computers. This may cause security problems such as the leaking of information from private files or damage to the information. Thus, we propose a cloud storage system that can solve the security problem of CSS for mobile devices using the personal computer. Our system is designed to store and manage files through direct communication between mobile devices and personal computer storage by using software as a service (SaaS), one of the computing services, instead of directly storing files in cloud storage.
Funding: Sponsored by the National Social Science Fund of China (Grant No. 13CFX054) and the Project of Humanities and Social Science of the Chinese Ministry of Education (Grant No. 11YJCZH175).
Abstract: Cloud computing is becoming the developing trend in the information field, and it is driving many transformations in related fields. In order to adapt to such changes, computer forensics is bound to improve and integrate into the new environment. Standing on this point, this paper suggests a computer forensic service framework based on the security architecture of cloud computing and the requirements of the cloud computing environment. The framework introduces the honey farm technique and pays more attention to active forensics, which can improve case handling efficiency and reduce cost.
Abstract: The rapid advent of artificial intelligence and big data has revolutionized the dynamic requirements on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task due to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting the Service Level Agreements (SLAs). However, the existing autonomic cloud resource management frameworks are not capable of handling cloud resources with their dynamic requirements. In this paper, the Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through the estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS initially adopts an adaptive density peak clustering algorithm for clustering the workloads of the cloud. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses CBBM for potential Virtual Machine (VM) deployment, which contributes to the provision of optimal resources. It is proposed with the capability of achieving optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. The experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Abstract: The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together within it. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and a Naive Bayes algorithm performs the classification of features. The selected features are classified further and forwarded to the Heterogeneous Attention Transformer (HAT) module. In this module, the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to enhance preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD were conducted to demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
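The precision, recall, and F1-score reported above are related quantities: F1 is the harmonic mean of precision and recall. An illustrative computation from raw confusion-matrix counts (the counts below are hypothetical, not the paper's data):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp)          # of flagged traffic, how much is truly malicious
    recall = tp / (tp + fn)             # of malicious traffic, how much was flagged
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical example: 90 attacks caught, 10 false alarms, 10 missed.
p, r, f = prf1(tp=90, fp=10, fn=10)
```

With these counts, precision and recall are both 0.9, and so is the F1-score.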
Funding: Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676) and Shanxi Soft Science Program Research Fund Project (2016041008-6).
Abstract: In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Abstract: Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
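Candidate solutions in multi-objective optimization of this kind are usually compared by Pareto dominance rather than a single score. A generic sketch for minimized objectives (e.g. latency, storage cost, energy per replica placement); this illustrates the standard dominance test, not DMGO's internal comparator:

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (all objectives minimized).

    `a` and `b` are equal-length tuples of objective values, e.g.
    (latency_ms, storage_cost, energy). `a` dominates `b` when it is
    no worse in every objective and strictly better in at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

A solution that no other candidate dominates belongs to the Pareto front, which is the set a multi-objective optimizer like DMGO aims to approximate.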
Funding: Supported by the National Natural Science Foundation of China (62122064, 62331021, 62371410), the Natural Science Foundation of Fujian Province of China (2023J02005 and 2021J011184), the President Fund of Xiamen University (20720220063), and the Nanqiang Outstanding Talents Program of Xiamen University.
Abstract: Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system is called Cloud-MRI, and it aims to solve the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. The data are then uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. The outcomes are then seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and finally improve diagnostic accuracy and work efficiency.
Abstract: Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction utilising long short-term memory (LSTM) networks with efficient load-balancing techniques led by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritizing speed and energy preservation. The experimental results demonstrate that our load balancing system, which utilises DRL, significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance. It consistently provides reliable and durable performance across a range of dynamic workloads.
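At the core of the deep Q-learning component mentioned above is the temporal-difference update of the action-value function. A minimal tabular sketch (the deep variant replaces the table with a neural network; the state/action names are hypothetical):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

    q: dict mapping state -> {action: value}.
    alpha: learning rate; gamma: discount factor.
    """
    best_next = max(q[next_state].values())          # greedy value of next state
    td_target = reward + gamma * best_next            # bootstrapped target
    q[state][action] += alpha * (td_target - q[state][action])

# Hypothetical load-balancing example: in state "s" (current cluster load),
# routing to "node_a" yields reward 1.0 and leads to state "t".
q = {"s": {"node_a": 0.0}, "t": {"node_a": 1.0}}
q_update(q, "s", "node_a", 1.0, "t")
```

After one step the value of ("s", "node_a") moves from 0 toward the target 1.0 + 0.9 * 1.0 = 1.9, landing at 0.1 * 1.9 = 0.19.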
Funding: Supported by the National Natural Science Foundation of China (No. 62172436), the Natural Science Foundation of Shaanxi Province (No. 2023-JC-YB-584), and the Engineering University of PAP's Funding for Scientific Research Innovation Team and Key Researcher (No. KYGG202011).
Abstract: Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data would then become unregulated. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud remains a pressing issue that urgently needs to be addressed. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We identify that this scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhanced the auditing process using masking techniques and designed new algorithms to strengthen security. We also provide formal proof of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that our scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that our scheme is not only secure and efficient but also supports batch auditing of cloud data. Specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
Abstract: Accurate prediction of cloud resource utilization is critical: it helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. At present, cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs. These methods are insufficient to achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which builds on the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter is designed to reduce noise and enhance the feature representation of the input data. The DVSA mechanism is proposed to optimize the selection of critical features to reduce computational complexity. The T-MOE framework is designed to adjust the model structure based on different resource characteristics, thereby improving prediction accuracy and adaptability. Experimental results show that our proposed SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in terms of prediction precision on both the Alibaba public dataset and the dataset collected by Beijing Jiaotong University (BJTU). Particularly, compared with the Informer model, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
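A Savitzky-Golay filter smooths a signal by fitting a low-order polynomial over a sliding window; for a fixed window the fit reduces to a convolution with precomputed weights. A stdlib-only sketch for the 5-point quadratic case (the paper does not specify its SG window length or polynomial order, so these parameters are illustrative):

```python
def savgol5(signal):
    """5-point, quadratic Savitzky-Golay smoothing.

    Uses the classic kernel (-3, 12, 17, 12, -3) / 35; endpoints are
    left unsmoothed for simplicity. A quadratic input passes through
    unchanged, which is the filter's shape-preserving property.
    """
    c = (-3, 12, 17, 12, -3)
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(k * signal[i + j - 2] for j, k in enumerate(c)) / 35
    return out
```

Because the kernel comes from an exact quadratic least-squares fit, noise is attenuated while peaks and trends up to second order are preserved, unlike a plain moving average which flattens them.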
Funding: Funded by Multimedia University (Ref: MMU/RMC/PostDoc/NEW/2024/9804).
Abstract: Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Hardware resource allocation in cloud computing still suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms. This permits dynamic cloud scalability while maintaining superior service quality. For host load prediction, we therefore present a hybrid convolutional neural network and long short-term memory model in this work. First, the input of the suggested hybrid model is subjected to the vector autoregression technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. After that, the remaining data are processed and fed into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves the long short-term memory network, which is suitable for representing the temporal information of irregular trends in time series components. A key aspect of the entire process is that we used the most appropriate activation function for this type of model, a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degrees of unpredictability in data centers. Because of this, two real load traces were used in this study's assessment of performance; one of the load traces comes from a typical distributed system. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
Abstract: The rapid advancement of technology has paved the way for innovative approaches to education. Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing are three transformative technologies reshaping how education is delivered, accessed, and experienced. These technologies enable personalized learning, optimize teaching processes, and make educational resources more accessible to learners worldwide. This paper examines the integration of these technologies into smart education systems, highlighting their applications, benefits, and challenges, and exploring their potential to bridge gaps in educational equity and inclusivity.
Abstract: Cloud computing (CC) provides infrastructure, storage services, and applications to users, which should be secured by appropriate procedures or policies. Security in the cloud environment is essential to safeguard infrastructure and user information from unauthorized access by implementing timely intrusion detection systems (IDS). Ensemble learning harnesses the collective power of multiple machine learning (ML) methods, and a feature selection (FS) process helps improve the robustness and overall precision of intrusion detection. Therefore, this article presents a meta-heuristic feature selection by ensemble learning-based anomaly detection (MFS-ELAD) algorithm for CC platforms. To realize this objective, the proposed approach first applies a min-max standardization technique. Then, the dimensionality of the features is reduced by the Prairie Dogs Optimizer (PDO) algorithm. For the recognition procedure, the MFS-ELAD method ensembles three DL techniques: sparse auto-encoder (SAE), stacked long short-term memory (SLSTM), and Elman neural network (ENN) algorithms. Eventually, the parameter fine-tuning of the DL algorithms is carried out using the sand cat swarm optimizer (SCSO) approach, which helps improve the recognition outcomes. The simulation examination of the MFS-ELAD system on the CSE-CIC-IDS2018 dataset exhibits its promising performance over other methods, with a maximal precision of 99.71%.
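The min-max standardization step mentioned above rescales each feature into the range [0, 1] so that features with large raw magnitudes do not dominate the optimizer. A minimal per-feature sketch:

```python
def min_max_scale(values):
    """Rescale a list of numbers to [0, 1] via min-max standardization.

    x' = (x - min) / (max - min); a constant feature maps to all zeros.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)  # degenerate: no spread to normalize
    return [(v - lo) / (hi - lo) for v in values]
```

In practice the same transformation is applied column-by-column to the feature matrix before feature selection and classification.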
Abstract: The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using tensor-based task representation, significantly minimizing computing overhead. The BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Experimental results from extensive simulations indicate the efficacy of the suggested strategy: the proposed approach demonstrates the highest level of accuracy, reaching 98.1%. This surpasses the GA-SVM model, which achieves an accuracy of 96.3%, and the ART model, which achieves an accuracy of 95.4%. The proposed method performs better in terms of response time, with 1.598 compared to the existing methods Energy-Focused Dynamic Task Scheduling (EFDTS) and Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL) with 2.31 and 2.04, respectively. Moreover, the proposed method performs better in terms of makespan, with 12 compared to Round Robin (RR) and Recurrent Attention-based Summarization Algorithm (RASA) with 20 and 14. The hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources by explicitly addressing scalability and real-time performance.
Abstract: The complexity of cloud environments challenges secure resource management, especially for intrusion detection systems (IDS). Existing strategies struggle to balance efficiency, cost fairness, and threat resilience. This paper proposes an innovative approach to managing cloud resources through the integration of a genetic algorithm (GA) with a "double auction" method. This approach seeks to enhance security and efficiency by aligning buyers and sellers within an intelligent market framework. It guarantees equitable pricing while utilizing resources efficiently and optimizing advantages for all stakeholders. The GA functions as an intelligent search mechanism that identifies optimal combinations of bids from users and suppliers, addressing issues arising from the intricacies of cloud systems. Analyses proved that our method surpasses previous strategies, particularly in terms of price accuracy, speed, and the capacity to manage large-scale activities: critical factors for real-time cybersecurity systems, such as IDS. Our research integrates artificial intelligence-inspired evolutionary algorithms with market-driven methods to develop intelligent resource management systems that are secure, scalable, and adaptable to evolving risks, such as process innovation.
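A double auction matches buyers' bids against sellers' asks for the same resource. A simplified clearing rule, shown here with midpoint pricing and without the GA layer, illustrates the market mechanism that the genetic algorithm searches over (the paper's exact clearing rule is not specified):

```python
def double_auction(bids, asks):
    """Greedy double-auction clearing with midpoint pricing.

    bids: prices buyers offer for a unit of resource.
    asks: prices sellers demand for a unit of resource.
    Match the highest bid with the lowest ask and trade while
    bid >= ask; each matched pair trades at the bid/ask midpoint.
    Returns the list of clearing prices for the executed trades.
    """
    bids = sorted(bids, reverse=True)   # most eager buyers first
    asks = sorted(asks)                 # cheapest sellers first
    trades = []
    for b, a in zip(bids, asks):
        if b < a:
            break                       # no further mutually beneficial trades
        trades.append((b + a) / 2)      # midpoint splits the surplus evenly
    return trades
```

In the paper's framework, a GA would then search over bundles of bids and asks to maximize overall welfare under cloud-specific constraints, rather than clearing greedily as in this sketch.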
Abstract: Cloud data centres have evolved with an issue of energy management due to the constant increase in size, complexity, and enormous consumption of energy. Energy management is a challenging issue that is critical in cloud data centres and an important research concern for many researchers. In this paper, we propose a cuckoo search (CS)-based optimisation technique for virtual machine (VM) selection and a novel placement algorithm considering different constraints. The energy consumption model and the simulation model have been implemented for the efficient selection of VMs. The proposed model, CSOA-VM, not only lessens violations at the service level agreement (SLA) level but also minimises VM migrations. The proposed model also saves energy, and the performance analysis shows that the energy consumption obtained is 1.35 kWh, the SLA violation is 9.2, and the number of VM migrations is about 268. Thus, there is an improvement in energy consumption of about 1.8% and a 2.1% improvement (reduction) in violations of SLA in comparison to existing techniques.
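Cuckoo search typically generates new candidate solutions (here, VM selections) by perturbing existing ones with Lévy-distributed steps, whose occasional long jumps aid global exploration. A common stdlib-only sketch using Mantegna's algorithm; this is a generic CS building block and an assumption, since the paper's exact operator is not given:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one approximately Levy-stable step via Mantegna's algorithm.

    beta: stability index, commonly 1.5 in cuckoo search.
    Returns u / |v|^(1/beta) with u ~ N(0, sigma^2) and v ~ N(0, 1),
    where sigma is chosen so the ratio follows a Levy distribution.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

In a CS loop, each "cuckoo" solution is moved by `x_new = x + step_size * levy_step()`, and a fraction of the worst nests is abandoned and re-seeded each generation; heavy-tailed steps make occasional large relocations across the search space likely.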