Task scheduling is a central problem in cloud computing that degrades system performance; it is an important means of arranging user requests and pursuing multiple goals. Cloud computing is the most popular technology today and offers research potential in many areas, such as resource allocation, task scheduling, security, and privacy. To improve system performance, an efficient task-scheduling algorithm is required. Existing task-scheduling algorithms focus on task resource requirements, CPU, memory, execution time, and execution cost. In this paper, a task-scheduling algorithm based on a Genetic Algorithm (GA) is presented for assigning and executing different tasks. The proposed algorithm aims to minimize both the completion time and the execution cost of tasks and to maximize resource utilization. We evaluate the algorithm's performance on two examples with different numbers of tasks and processors. The first example contains ten tasks and four processors, with computation costs generated randomly. The second example has eight processors, with the number of tasks ranging from twenty to seventy; the computation cost of each task on each processor is generated randomly. The results show that the proposed approach is notably successful at finding optimal solutions for the three objectives: completion time, execution cost, and resource utilization.
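The abstract does not give the chromosome encoding, but a common realisation of such a multi-objective GA represents a schedule as a task-to-processor vector and scores it on completion time, cost, and utilization together. The sketch below is a minimal illustration under that assumption; the weights and the random cost matrices are hypothetical, not taken from the paper.

```python
import random

def evaluate(schedule, cost_time, cost_money, w=(0.4, 0.3, 0.3)):
    """Score a task-to-processor assignment (lower is better).

    schedule[i]      = processor chosen for task i
    cost_time[i][p]  = execution time of task i on processor p
    cost_money[i][p] = execution cost of task i on processor p
    """
    n_proc = len(cost_time[0])
    load = [0.0] * n_proc
    money = 0.0
    for task, proc in enumerate(schedule):
        load[proc] += cost_time[task][proc]
        money += cost_money[task][proc]
    makespan = max(load)
    utilization = sum(load) / (n_proc * makespan)      # 1.0 = perfectly balanced
    return w[0] * makespan + w[1] * money - w[2] * utilization

# Toy instance: 10 tasks, 4 processors, random costs as in the paper's first example.
random.seed(1)
cost_time = [[random.uniform(1, 10) for _ in range(4)] for _ in range(10)]
cost_money = [[random.uniform(1, 5) for _ in range(4)] for _ in range(10)]
population = [[random.randrange(4) for _ in range(10)] for _ in range(20)]
best = min(population, key=lambda s: evaluate(s, cost_time, cost_money))
print(best, evaluate(best, cost_time, cost_money))
```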
Nowadays, achieving secure communication and protecting sensitive data from unauthorized access over public networks are the main concerns for cloud servers. Hence, to secure both data and keys and thereby ensure secure data storage and access, the proposed work designs a novel Quantum Key Distribution (QKD) scheme built on a non-commutative encryption framework. The QKD approach guarantees highly secure data transmission. Along with this, a shared secret is generated using Diffie-Hellman (DH) to provide secure key generation at reduced time complexity. Moreover, a non-commutative approach is used that allows users to store encrypted data in the cloud server and access it. Also, to prevent data loss or corruption caused by insiders in the cloud, an Optimized Genetic Algorithm (OGA) is utilized, which recovers missing data without loss. Decryption then follows when requested by the user. Thus, the proposed framework ensures authentication and paves the way for secure data access, with enhanced performance and reduced complexity compared with prior works.
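The abstract combines QKD, Diffie-Hellman, and a non-commutative cipher; only the classical Diffie-Hellman key-agreement step lends itself to a few lines of code. The sketch below illustrates that step alone with a deliberately small toy modulus (real deployments use much larger groups or elliptic curves); it is not the paper's QKD or non-commutative scheme.

```python
import secrets

# Toy Diffie-Hellman over a small prime -- illustration only, NOT secure parameters.
P = 0xFFFFFFFB          # 32-bit toy prime modulus
G = 5                   # toy generator

a = secrets.randbelow(P - 2) + 1          # user's private exponent
b = secrets.randbelow(P - 2) + 1          # server's private exponent

A = pow(G, a, P)                          # user sends A over the public channel
B = pow(G, b, P)                          # server sends B over the public channel

shared_user = pow(B, a, P)                # user derives the shared secret
shared_server = pow(A, b, P)              # server derives the same secret
assert shared_user == shared_server       # both sides now hold the same key material
```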
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become one of the most discussed problems among researchers. Task-scheduling algorithms are responsible for allocating tasks to computing resources for execution, and an inefficient task-scheduling algorithm results in under- or over-utilization of resources, which in turn degrades the services. Therefore, in the proposed work, load balancing is treated as an important criterion for task scheduling in a cloud computing environment, as it can reduce the overhead of this critical, decision-oriented process. In this paper, we propose an adaptive genetic-algorithm-based load-balancing (GALB)-aware task-scheduling technique that not only improves resource utilization but also optimizes key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used in this work, which improves the adaptation of the fittest individuals of the current generation and prevents their elimination. The CloudSim simulator has been used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better on all key indicators and outperforms the peer algorithms taken into consideration.
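The abstract attributes GALB's gains to adaptive crossover and mutation that protect the fittest individuals of the current generation. A widely used way to do this (in the style of Srinivas and Patnaik) scales the operator probabilities down for above-average individuals; the sketch below illustrates that idea, with constants k1-k4 chosen as illustrative defaults rather than values from the paper.

```python
def adaptive_rates(f, f_avg, f_max, k1=0.9, k2=0.6, k3=0.1, k4=0.05):
    """Return (crossover_prob, mutation_prob) for an individual with fitness f.

    Above-average individuals (f >= f_avg) get smaller probabilities, so the
    fittest chromosomes of the current generation are less likely to be
    disrupted; below-average individuals keep the full rates (k2, k4).
    """
    if f_max == f_avg:                 # degenerate population: all individuals identical
        return k1, k3
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k3 * scale  # the best individual gets probability 0
    return k2, k4

# Example: fitness = negated makespan, so larger is better.
print(adaptive_rates(f=-120.0, f_avg=-150.0, f_max=-100.0))
```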
Genetic Algorithms (GA) have been widely used to solve various optimization problems. As the solving process of a GA requires large storage and computing resources, it is well motivated to outsource it to a cloud server. However, the algorithm user would never want his data to be disclosed to the cloud server. Thus, the user must encrypt the data before transmitting them to the server. But this creates a new problem: the familiar arithmetic operations cannot work directly in the ciphertext domain. In this paper, a privacy-preserving outsourced genetic algorithm is proposed. The user's data are protected by a homomorphic encryption algorithm that supports operations in the encrypted domain, and the GA is carefully adapted to search for the optimal result over the encrypted data. The security analysis and experimental results demonstrate the effectiveness of the proposed scheme.
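The abstract does not name the homomorphic scheme. As a stand-in, the sketch below uses the python-paillier (phe) package, which is additively homomorphic, to show the general idea of a cloud accumulating an encrypted fitness term without seeing the user's data; the linear fitness model is an assumption made for illustration.

```python
# Requires python-paillier:  pip install phe
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# User side: encrypt the private cost coefficients before outsourcing them.
private_costs = [3.5, 1.2, 7.8, 2.4]
enc_costs = [public_key.encrypt(c) for c in private_costs]

# Cloud side: accumulate a fitness term for a candidate chromosome using only
# ciphertext additions -- the server never sees the plaintext costs.
chromosome = [1, 0, 1, 1]          # which encrypted items this candidate selects
enc_fitness = None
for enc_cost, gene in zip(enc_costs, chromosome):
    if gene:
        enc_fitness = enc_cost if enc_fitness is None else enc_fitness + enc_cost

# User side: only the key owner can decrypt the aggregated score.
print(private_key.decrypt(enc_fitness))    # approximately 3.5 + 7.8 + 2.4 = 13.7
```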
For the massive order-allocation problem of third-party logistics (TPL) in e-commerce, this paper proposes a general order-allocation model based on a cloud architecture and a hybrid genetic algorithm (GA). It implements cloud-deployable MapReduce (MR) code to parallelize the allocation process, uses a heuristic rule to repair illegal chromosomes during encoding, and adopts mixed-integer programming (MIP) as the fitness function to guarantee the rationality of chromosome fitness. Simulation experiments show that, for mass processing of orders, the model's performance in a multi-server cluster environment is markedly superior to that in a stand-alone environment. The model can be applied directly to a cloud-based logistics information platform (LIP) in the near future, enabling fast automatic allocation of massive concurrent orders, with great application value.
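The abstract mentions a heuristic rule that repairs illegal chromosomes produced during encoding, but does not spell it out. One plausible reading for order allocation is to move orders away from over-capacity warehouses to the least-loaded warehouse that can still take them; the sketch below illustrates that reading only and is not the paper's exact rule.

```python
def repair(chromosome, order_qty, capacity):
    """Repair an order-to-warehouse assignment so no warehouse exceeds capacity.

    chromosome[i] = warehouse assigned to order i (may be infeasible after
    crossover/mutation); orders are moved greedily to the least-loaded
    warehouse that can still absorb them.
    """
    load = [0] * len(capacity)
    for order, wh in enumerate(chromosome):
        load[wh] += order_qty[order]

    for order, wh in enumerate(chromosome):
        if load[wh] <= capacity[wh]:
            continue                             # this warehouse is feasible (so far)
        # Find the least-loaded warehouse with room for this order.
        candidates = [w for w in range(len(capacity))
                      if load[w] + order_qty[order] <= capacity[w]]
        if not candidates:
            continue                             # nothing fits; leave it for a penalty term
        target = min(candidates, key=lambda w: load[w])
        load[wh] -= order_qty[order]
        load[target] += order_qty[order]
        chromosome[order] = target
    return chromosome

print(repair([0, 0, 0, 1], order_qty=[5, 5, 5, 2], capacity=[10, 10]))  # -> [1, 0, 0, 1]
```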
To solve the problem that resource scheduling in a cloud data center takes too long, this paper analyzes the two-stage resource-scheduling mechanism of the cloud data center. Aiming at the minimum task completion time, a mathematical model of resource scheduling in the cloud data center is established, and the two-stage resource-scheduling optimization is simulated using a conventional genetic algorithm. Building on the conventional genetic algorithm, an adaptive transformation operator is designed to improve its crossover and mutation. The experimental results show that the improved genetic algorithm can significantly reduce the total completion time of the tasks and has good convergence and global optimization ability.
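The abstract targets minimum total completion time for a two-stage assignment (tasks to virtual machines, virtual machines to hosts) but gives no formulas. The sketch below shows one plausible way to evaluate such a schedule, under the assumed model that VMs placed on the same host run back to back and the schedule finishes when the slowest host does; this is an illustrative model, not the paper's.

```python
def total_completion_time(task_len, task_to_vm, vm_to_host, vm_mips, host_share):
    """Evaluate a two-stage schedule: tasks -> VMs, VMs -> hosts.

    Assumed model (for illustration): a VM's run time is its total task length
    divided by its MIPS, scaled by the share of the host it receives; the
    schedule's completion time is determined by the slowest host.
    """
    vm_work = [0.0] * len(vm_mips)
    for t, vm in enumerate(task_to_vm):
        vm_work[vm] += task_len[t]

    host_time = [0.0] * len(host_share)
    for vm, host in enumerate(vm_to_host):
        run_time = vm_work[vm] / (vm_mips[vm] * host_share[host])
        host_time[host] += run_time          # VMs on one host run back to back here
    return max(host_time)

# Three tasks on two VMs, both VMs packed onto host 0 (host 1 idle).
print(total_completion_time(task_len=[400, 200, 300],
                            task_to_vm=[0, 0, 1],
                            vm_to_host=[0, 0],
                            vm_mips=[100, 150],
                            host_share=[1.0, 1.0]))   # -> 8.0
```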
In this paper, we propose a campus equipment ubiquitous-management system based on a genetic algorithm running on a cloud server. The system uses radio frequency identification (RFID) to monitor the status of equipment in real time and uses wired or wireless networks to display the real-time situation on a manager's PC or PDA. In addition, the system synchronizes with a database to record and preserve messages, and the status is displayed not only to a single manager but to a number of managers. To increase efficiency between the graphical user interface (GUI) and the database, the system adopts the SqlDependency object of ADO.NET so that any change in the database is known immediately and synchronized with the manager's PC or PDA. Because the equipment-utilization problem is NP-complete (non-deterministic polynomial), we apply a genetic algorithm to find an optimal solution for equipment utilization efficiently. We feed constraints into the system, and the system posts the optimal solution back to the screen. We then compare our genetic-algorithm-based approach (GA) with a simulated-annealing-based approach (SA) for maximizing equipment utilization. Experimental results show that our GA approach achieves an average 79.66% improvement in equipment utilization within an acceptable run time.
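The paper benchmarks its GA against simulated annealing for maximizing equipment utilization. For readers comparing the two search strategies, the sketch below shows the core SA acceptance loop on an abstract utilization function; the neighbourhood move, cooling schedule, and toy objective are generic choices, not those of the paper.

```python
import math
import random

def simulated_annealing(initial, utilization, neighbour,
                        t_start=10.0, t_end=0.01, alpha=0.95, moves_per_temp=50):
    """Maximize `utilization` by accepting worse neighbours with a
    temperature-controlled probability (the classic Metropolis criterion)."""
    current = best = initial
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            candidate = neighbour(current)
            delta = utilization(candidate) - utilization(current)
            if delta >= 0 or random.random() < math.exp(delta / t):
                current = candidate
                if utilization(current) > utilization(best):
                    best = current
        t *= alpha                               # geometric cooling schedule
    return best

# Toy usage: assign 6 devices to 3 rooms so usage is spread as evenly as possible.
def util(assign):                                # higher when rooms are evenly used
    return -max(assign.count(r) for r in range(3))

def neighbour(assign):
    nxt = list(assign)
    nxt[random.randrange(len(nxt))] = random.randrange(3)   # reassign one device
    return nxt

best = simulated_annealing([0] * 6, util, neighbour)
print(best, util(best))
```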
In IoT networks, nodes communicate with each other for computational services, data processing, and resource sharing. Most of the time, huge amounts of data are generated at the network edge due to extensive communication between IoT devices, and this tidal data is transferred to the cloud data center (CDC) for efficient processing and effective storage. In the CDC, leader nodes are responsible for higher performance, reliability, deadlock handling, and reduced latency, and for providing cost-effective computational services to users. However, optimal leader selection is a computationally hard problem, as several factors such as memory, CPU MIPS, and bandwidth need to be considered while selecting a leader among the available nodes. Existing approaches for leader selection are monolithic, as they identify leader nodes without taking an optimal approach to leader resources. Therefore, for optimal leader-node selection, a genetic algorithm (GA)-based leader election (GLEA) approach is presented in this paper. The proposed GLEA uses the available resources to evaluate candidate nodes during the leader-election process. In the first phase of the algorithm, the cost of individual nodes and the overall cluster cost are computed on the basis of the available resources. In the second phase, the best computational nodes are selected as leader nodes by applying genetic operations to a cost function over the available resources. The GLEA procedure is then compared against the Bees Life Algorithm (BLA). The experimental results show that the proposed scheme outperforms BLA and state-of-the-art schemes in terms of execution time, SLA violations, and resource utilization.
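The abstract scores candidate leaders on resources such as memory, CPU MIPS, and bandwidth and aggregates node costs into a cluster cost before applying the genetic operators. The sketch below shows one such weighted scoring; the weights, the inverse-capacity form, and the greedy pick at the end are assumptions made for illustration, not the paper's exact cost model.

```python
def node_cost(node, weights=(0.4, 0.4, 0.2)):
    """Lower cost = better leader candidate. `node` holds available capacities."""
    w_cpu, w_mem, w_bw = weights
    # Invert each capacity so that well-resourced nodes receive a low cost.
    return (w_cpu / node["mips"]) + (w_mem / node["memory"]) + (w_bw / node["bandwidth"])

def cluster_cost(nodes):
    """Overall cluster cost: the sum of individual node costs."""
    return sum(node_cost(n) for n in nodes)

nodes = [
    {"id": "n1", "mips": 2000, "memory": 8,  "bandwidth": 100},
    {"id": "n2", "mips": 4000, "memory": 16, "bandwidth": 1000},
    {"id": "n3", "mips": 1000, "memory": 4,  "bandwidth": 100},
]

leader = min(nodes, key=node_cost)          # a GA would search over leader sets;
print(leader["id"], cluster_cost(nodes))    # this greedy pick just shows the scoring
```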
With the development of computerized business applications, the amount of data is increasing exponentially. Cloud computing provides high-performance computing resources and mass storage for massive data processing. In distributed cloud computing systems, data-intensive computing can lead to data scheduling between data centers. Reasonable data placement can effectively reduce data scheduling between data centers and improve the data-acquisition efficiency of users. In this paper, a mathematical model of data scheduling between data centers is built. By means of the global optimization ability of the genetic algorithm, generational evolution produces better approximate solutions and finally obtains the best approximation of the data placement. The experimental results show that the genetic algorithm can effectively work out an approximately optimal data placement and minimize data scheduling between data centers.
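The abstract's objective is to minimise data scheduling (movement) between data centers for a candidate placement. A minimal fitness of that shape, assuming we know which datasets each task reads and where each task runs, is sketched below; the data structures are illustrative, not the paper's model.

```python
def cross_center_traffic(placement, task_site, task_inputs, dataset_size):
    """Total volume moved between data centers for one candidate placement.

    placement[d]   = data center holding dataset d
    task_site[t]   = data center where task t executes
    task_inputs[t] = datasets read by task t
    """
    traffic = 0
    for task, site in enumerate(task_site):
        for d in task_inputs[task]:
            if placement[d] != site:            # remote read -> a scheduled transfer
                traffic += dataset_size[d]
    return traffic

# Two data centers, three datasets, three tasks.
dataset_size = [10, 40, 5]
task_inputs = [[0, 1], [1], [2, 0]]
task_site = [0, 1, 0]
print(cross_center_traffic([0, 1, 0], task_site, task_inputs, dataset_size))  # -> 40
```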
The complexity of cloud environments challenges secure resource management, especially for intrusion detection systems (IDS). Existing strategies struggle to balance efficiency, cost fairness, and threat resilience. This paper proposes an innovative approach to managing cloud resources through the integration of a genetic algorithm (GA) with a double-auction method. This approach seeks to enhance security and efficiency by matching buyers and sellers within an intelligent market framework; it guarantees equitable pricing while utilizing resources efficiently and optimizing benefits for all stakeholders. The GA functions as an intelligent search mechanism that identifies optimal combinations of bids from users and suppliers, addressing issues arising from the intricacies of cloud systems. Our analyses show that the method surpasses previous strategies, particularly in price accuracy, speed, and the capacity to manage large-scale activities, which are critical factors for real-time cybersecurity systems such as IDS. Our research integrates artificial-intelligence-inspired evolutionary algorithms with market-driven methods to develop intelligent resource-management systems that are secure, scalable, and adaptable to evolving risks and process innovation.
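The abstract pairs the GA with a double-auction market but does not describe the clearing rule. The sketch below uses a standard textbook clearing (sort bids descending and asks ascending, trade while a bid meets an ask, settle at the midpoint) so the market side of the approach is concrete; this generic rule is not necessarily the one used in the paper.

```python
def clear_double_auction(bids, asks):
    """Match buyers and sellers of one resource type.

    bids: list of (buyer_id, price willing to pay)
    asks: list of (seller_id, price willing to accept)
    Returns a list of (buyer_id, seller_id, trade_price).
    """
    bids = sorted(bids, key=lambda b: b[1], reverse=True)   # highest bid first
    asks = sorted(asks, key=lambda a: a[1])                 # lowest ask first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:
            break                                           # no further gains from trade
        trades.append((buyer, seller, (bid + ask) / 2))     # midpoint pricing
    return trades

bids = [("u1", 9.0), ("u2", 6.5), ("u3", 4.0)]
asks = [("p1", 3.0), ("p2", 5.0), ("p3", 8.0)]
print(clear_double_auction(bids, asks))   # u1-p1 at 6.0, u2-p2 at 5.75
```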
The problem of joint radio and cloud resource allocation is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource-allocation schemes is to maximize the total utility of users while satisfying required quality-of-service (QoS) constraints such as the end-to-end response latency experienced by each user. We formulate joint resource allocation as a combinatorial optimization problem and consider three evolutionary approaches to solve it: a genetic algorithm (GA), ant colony optimization with a genetic algorithm (ACO-GA), and a quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping between the resource-allocation matrix and the chromosome of GA, ACO-GA, and QGA; search the available radio and cloud resource pairs based on resource-availability matrices for ACO-GA; and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that the proposed methods greatly outperform existing algorithms in terms of running time, accuracy of the final results, total utility, resource utilization, and end-to-end response-latency guarantees.
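The abstract's device for reducing time complexity is a mapping between the resource-allocation matrix and the chromosome. The sketch below shows the most direct such mapping (one gene per user holding the index of its (radio, cloud) resource pair); the dimensions are illustrative and the encoding is an assumption, not necessarily the paper's.

```python
import numpy as np

N_USERS, N_RADIO, N_CLOUD = 4, 3, 2
PAIRS = [(r, c) for r in range(N_RADIO) for c in range(N_CLOUD)]   # every (radio, cloud) pair

def chromosome_to_matrix(chromosome):
    """Gene i = index of the (radio, cloud) pair allocated to user i."""
    alloc = np.zeros((N_USERS, N_RADIO, N_CLOUD), dtype=int)
    for user, gene in enumerate(chromosome):
        r, c = PAIRS[gene]
        alloc[user, r, c] = 1
    return alloc

def matrix_to_chromosome(alloc):
    """Invert the mapping: read off each user's single allocated pair."""
    return [PAIRS.index((int(r), int(c)))
            for r, c in (np.argwhere(alloc[user])[0] for user in range(N_USERS))]

chrom = [0, 5, 3, 2]
assert matrix_to_chromosome(chromosome_to_matrix(chrom)) == chrom
print(chromosome_to_matrix(chrom)[1])      # user 1 holds radio 2, cloud 1
```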
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, most fog nodes in this environment are geographically scattered and have limited resources compared with cloud nodes, which makes the application-placement problem more complex than in cloud computing. A cost-efficient application-placement approach for fog-cloud environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing cost; such an approach is particularly relevant where latency, resource constraints, and cost are crucial factors for deployment. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, places the application modules with respect to application deadlines and deploys them to fog or cloud nodes so as to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
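The abstract lists four cost types (computation, communication, energy consumption, and violations) that GA-FSA minimises, without giving formulas. The sketch below aggregates a cost of that shape so the objective is concrete; the per-module fields and unit prices are assumptions made for illustration.

```python
def placement_cost(modules, placement, nodes,
                   price_cpu=0.02, price_net=0.01, price_energy=0.05, penalty=5.0):
    """Aggregate cost of placing application modules on fog/cloud nodes.

    modules[m]: dict with 'mi' (million instructions), 'bytes' (data moved), 'deadline'
    nodes[n]:   dict with 'mips', 'latency', 'watts_per_mi'
    """
    total = 0.0
    for m, n in zip(modules, (nodes[p] for p in placement)):
        exec_time = m["mi"] / n["mips"]
        total += price_cpu * exec_time                       # computation cost
        total += price_net * m["bytes"] * n["latency"]       # communication cost
        total += price_energy * m["mi"] * n["watts_per_mi"]  # energy cost
        if exec_time + n["latency"] > m["deadline"]:
            total += penalty                                 # deadline-violation cost
    return total

fog = {"mips": 1500, "latency": 0.01, "watts_per_mi": 1e-4}
cloud = {"mips": 8000, "latency": 0.20, "watts_per_mi": 4e-4}
modules = [{"mi": 3000, "bytes": 2.0, "deadline": 2.5},
           {"mi": 12000, "bytes": 0.5, "deadline": 3.0}]
print(placement_cost(modules, placement=[0, 1], nodes=[fog, cloud]))
```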
The Equilibrium Optimiser (EO) has been demonstrated to be a metaheuristic algorithm that can effectively solve global optimisation problems. Balancing exploration against exploitation while enhancing the ability to escape local optima are two key points to be addressed in EO research. To alleviate these limitations, an EO variant named the adaptive elite-guided Equilibrium Optimiser (AEEO) is introduced. Specifically, the adaptive elite-guided search mechanism improves the balance between exploration and exploitation, and the modified mutualism phase reinforces information interaction among particles and the avoidance of local optima. The cooperation of these two mechanisms boosts the overall performance of the basic EO. The AEEO is subjected to competitive experiments against state-of-the-art and modified algorithms on 23 classical benchmark functions and the IEEE CEC 2017 function test suite. Experimental results demonstrate that AEEO outperforms several well-performing EO, DE, PSO, SSA, and GWO variants in terms of convergence speed and accuracy. In addition, the AEEO algorithm is applied to the edge-server (ES) placement problem in mobile edge computing (MEC) environments, where it outperforms the representative approaches compared in terms of access latency and deployment cost.
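For readers unfamiliar with the baseline that AEEO modifies, the sketch below reproduces the concentration update of the standard Equilibrium Optimiser (equilibrium pool of the four best candidates plus their mean, exponential term F, generation rate G) as commonly stated for the original EO. It does not include the adaptive elite-guided search or the modified mutualism phase that the paper adds, and the parameter values are the usual defaults rather than the paper's.

```python
import numpy as np

def eo_update(C, C_eq_pool, iteration, max_iter, a1=2.0, a2=1.0, GP=0.5, V=1.0):
    """One concentration update of the standard Equilibrium Optimiser.

    C         : (dim,) current particle position
    C_eq_pool : candidate equilibrium states (4 best particles + their mean)
    """
    dim = C.shape[0]
    C_eq = C_eq_pool[np.random.randint(len(C_eq_pool))]      # random pool member
    lam = np.random.rand(dim)
    r = np.random.rand(dim)
    t = (1 - iteration / max_iter) ** (a2 * iteration / max_iter)
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)       # exploration term
    r1, r2 = np.random.rand(), np.random.rand()
    GCP = 0.5 * r1 if r2 >= GP else 0.0                      # generation control
    G = GCP * (C_eq - lam * C) * F                           # generation rate
    return C_eq + (C - C_eq) * F + (G / (lam * V)) * (1 - F)

# Toy usage on a 5-dimensional particle.
np.random.seed(0)
pool = [np.random.rand(5) for _ in range(4)]
pool.append(np.mean(pool, axis=0))
print(eo_update(np.random.rand(5), pool, iteration=10, max_iter=100))
```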