Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23082).
Abstract: Fog computing is a key enabling technology of 6G systems, as it provides the quick and reliable computing and data storage services required by several 6G applications. Artificial Intelligence (AI) algorithms will be an integral part of 6G systems, and efficient task offloading techniques using fog computing will improve their performance and reliability. This paper focuses on the scenario of Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks and processed in parallel, expediting task completion. However, POMH presents challenges: tasks must be broken into subtasks, and those subtasks must be scaled according to many interdependent factors so that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH scenarios yields dynamic preference profiles for the helping devices, because subtask sizes change, which produces a difficult-to-solve externalities problem. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation in POMH scenarios. We also propose a new time-efficient preference profiling technique that further improves time optimization in POMH scenarios. The performance of the proposed technique is thoroughly evaluated against baseline schemes, revealing many advantages of the proposed approach. The simulation findings show that the proposed matching-based offloading technique outperforms existing methodologies in the literature, yielding a 52% reduction in task latency, particularly under high workloads.
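The core POMH requirement described above, scaling subtasks so that all helpers finish simultaneously, can be sketched with a simple proportional split. This is an illustrative sketch only, not the paper's matching algorithm: it assumes each helper is characterized by a single processing rate and ignores transmission delays, which the full scheme would also have to account for.

```python
def split_task_pomh(total_cycles, helper_rates):
    """Split a task among helpers so every helper finishes at once.

    Proportional split: helper i receives a share of the workload
    proportional to its processing rate, so all completion times
    (share_i / rate_i) are equal and no helper idles at the end.
    Assumes rates are positive and transmission delay is negligible.
    """
    total_rate = sum(helper_rates)
    shares = [total_cycles * r / total_rate for r in helper_rates]
    finish_time = total_cycles / total_rate  # common completion time
    return shares, finish_time
```

For example, splitting a 1-gigacycle task across helpers with rates 2, 1, and 1 gigacycles/s gives shares of 0.5, 0.25, and 0.25 gigacycles, and every helper finishes after 0.25 s.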
Funding: Supported by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through project number 959.
Abstract: Social media platforms such as Twitter are data repositories where people exchange views on global issues like the COVID-19 pandemic. Social media has been shown to contribute to the low acceptance of vaccines. This work aims to identify public sentiment concerning the COVID-19 vaccines and to better understand the individual sensitivities and feelings behind it. It proposes a method to analyze the opinion expressed in an individual's tweet about the COVID-19 vaccines and introduces a sigmoidal particle swarm optimization (SPSO) algorithm. First, the performance of SPSO is measured on a set of 12 benchmark problems; it is then deployed to select optimal text features and categorize sentiment. The proposed method uses TextBlob and VADER for sentiment analysis and CountVectorizer and a term frequency-inverse document frequency (TF-IDF) vectorizer for feature extraction, followed by SPSO-based feature selection. A COVID-19 vaccination tweets dataset was created and used for training, validation, and testing. The proposed approach outperformed the algorithms considered in terms of accuracy. Additionally, we augmented the newly created dataset to balance it and increase performance. A classical support vector machine (SVM) gives better accuracy on the augmented dataset without a feature selection algorithm, which shows that augmentation improves the overall accuracy of tweet analysis. After augmentation, the performance of PSO and SPSO improved by almost 7% and 5%, respectively, and a simple SVM with 10-fold cross-validation improved significantly compared to the primary dataset.
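The "sigmoidal" element of binary PSO-style feature selection is typically a sigmoid transfer function that maps a particle's continuous velocity to a probability of selecting each feature. The sketch below shows only that mapping step, not the full SPSO algorithm from the paper; the function names and the idea of passing pre-drawn uniform numbers (for determinism) are our own assumptions.

```python
import math


def sigmoid(v):
    """Standard logistic function, mapping a velocity to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))


def binarize_positions(velocities, draws):
    """Sigmoid transfer step of binary PSO feature selection.

    Feature j is selected (bit 1) when the uniform draw for j falls
    below sigmoid(v_j): strongly positive velocities almost always
    select the feature, strongly negative ones almost never do.
    In a real run, `draws` would come from random.random().
    """
    return [1 if d < sigmoid(v) else 0 for v, d in zip(velocities, draws)]
```

The resulting 0/1 mask is applied to the TF-IDF feature matrix columns before training the classifier, so each particle encodes one candidate feature subset.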
Funding: This work was supported by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through project number 959.
Abstract: One of the challenging problems in evolutionary computing algorithms is maintaining the balance between exploration and exploitation capability while searching for global optima. A novel convergence-track-based adaptive differential evolution (CTbADE) algorithm is presented in this paper. The crossover rate and mutation probability parameters of a differential evolution algorithm play a significant role in finding global optima. A more diverse population improves global search capability and helps escape local optima. Tracking the convergence path over time helps enhance the search speed of a differential evolution algorithm across varying problems. This paper introduces adaptive parameter-control sequences that use learning-period-based memory and follow the convergence track over time. The proposed algorithm helps maintain the equilibrium between an algorithm's exploration and exploitation capability. A comprehensive test suite of standard benchmark problems of different natures, i.e., unimodal/multimodal and separable/non-separable, was used to test the convergence power of the proposed CTbADE algorithm. Experimental results show the significant performance of CTbADE in terms of average fitness, solution quality, and convergence speed when compared with standard differential evolution algorithms and several commonly used state-of-the-art algorithms such as jDE, CoDE, and EPSDE. This algorithm is a useful addition to the literature for solving real-time problems and for optimizing computational models with a large number of parameters to adjust during problem solving.
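To make the adaptive-parameter idea concrete, here is a minimal DE/rand/1/bin loop in which the mutation factor F is raised when the best fitness stagnates, as a simple stand-in for the paper's convergence-track memory. The adaptation rule, constants, and function names are our own illustrative assumptions, not the CTbADE scheme itself.

```python
import random


def de_optimize(fitness, dim, bounds, pop_size=20, gens=200, seed=1):
    """Minimal DE/rand/1/bin with a toy convergence-driven adaptation:
    when the best fitness does not improve over a generation, F is
    increased to push exploration; on improvement it resets to 0.5."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    F, CR = 0.5, 0.9
    for _ in range(gens):
        best_before = min(fit)
        for i in range(pop_size):
            # three distinct random individuals, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]  # clamp to bounds
            f_trial = fitness(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
        # toy convergence track: widen search when the best value stalls
        F = min(0.9, F + 0.05) if min(fit) >= best_before else 0.5
    return min(fit)
```

On the 5-dimensional sphere function this sketch converges to near zero within a few hundred generations, which is the kind of unimodal benchmark the test suite above includes.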
Funding: The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no. RG-21-07-06.
Abstract: Wireless nodes are one of the main components of the various applications offered in a smart city. These wireless nodes are responsible for executing multiple tasks with different priority levels. Because wireless nodes have limited processing capacity, they offload tasks to cloud servers when the number of tasks exceeds that capacity. Executing these tasks on remotely placed cloud servers causes significant delay, which is unacceptable for time-sensitive task applications. This execution delay is reduced by placing fog computing nodes near the application nodes. A fog node also has limited processing capacity and is sometimes unable to execute all the requested tasks. In this work, an optimal task offloading scheme comprising two algorithms is proposed that lets fog nodes optimally execute time-sensitive offloaded tasks. The first algorithm defines the task processing criteria for local computation of tasks at the fog nodes and remote computation at the cloud server. The second algorithm allows fog nodes to optimally select the most time-sensitive tasks within their task capacity. The results show that the proposed task execution scheme significantly reduces execution time and that most of the time-sensitive tasks are executed.
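The second algorithm's goal, keeping the most time-sensitive tasks local within the fog node's capacity, can be sketched as a greedy deadline-first selection. This is an illustrative simplification under assumed task fields (`id`, `cycles`, `deadline`), not the paper's actual algorithm.

```python
def schedule_tasks(tasks, fog_capacity):
    """Greedy fog/cloud split for time-sensitive tasks.

    Tasks are sorted by deadline (most time-sensitive first) and
    executed locally until the fog node's processing capacity (in
    CPU cycles) is exhausted; the remainder are offloaded to the
    cloud. Each task is a dict with 'id', 'cycles', and 'deadline'.
    """
    local, cloud, used = [], [], 0
    for task in sorted(tasks, key=lambda t: t["deadline"]):
        if used + task["cycles"] <= fog_capacity:
            local.append(task["id"])
            used += task["cycles"]
        else:
            cloud.append(task["id"])
    return local, cloud
```

The greedy deadline ordering guarantees that whenever a task is sent to the cloud, every task kept local is at least as urgent, matching the stated aim of executing most time-sensitive tasks at the fog node.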