Abstract: Some countries have announced national benchmark rates, while others have been preparing for the retirement of the London Interbank Offered Rate (LIBOR) at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered using daily data between December 28, 2018, and December 31, 2020, by applying machine learning algorithms and Ordinary Least Squares (OLS). The empirical results show that (1) the most significant determinant is the amount of securities bought by central banks; (2) country-level macroeconomic factors have a higher impact, global factors are less important, and the pandemic does not have a significant effect; (3) Random Forest is the most accurate prediction model. Acting on the study's findings can help support economic growth by keeping benchmark rates low.
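A minimal sketch of the modelling comparison described above: fitting an OLS baseline and a Random Forest on daily benchmark-rate data and inspecting feature importances. This is not the authors' code; the feature names and synthetic data are placeholders standing in for the global and macroeconomic determinants mentioned in the abstract.

```python
# Hedged sketch, not the study's pipeline: OLS vs. Random Forest on daily data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cols = ["vix", "fed_rate", "oil_price",          # stand-ins for the three global factors
        "cb_securities", "cpi", "fx_rate",       # stand-ins for macro factors
        "policy_rate", "m2", "covid_dummy"]
X = pd.DataFrame(rng.normal(size=(500, len(cols))), columns=cols)
y = 10 + 2.0 * X["cb_securities"] + 0.5 * X["policy_rate"] + rng.normal(scale=0.3, size=500)

# chronological split, since the underlying series is daily data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

for name, model in [("OLS", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(n_estimators=500, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:13s} RMSE: {rmse:.3f}")

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(sorted(zip(rf.feature_importances_, cols), reverse=True)[:3])  # dominant determinants
```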
Abstract: This paper presents an improved BP algorithm. The approach reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) for each iteration is determined by a dynamic optimization method to accelerate the convergence rate. Since determining the learning rate in the proposed BP algorithm uses only the first-order derivatives already obtained in the standard BP algorithm (SBP), the computational and storage burden is comparable to that of SBP, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
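The abstract does not spell out the dynamic-optimization rule, so the sketch below uses a simple Armijo-style backtracking line search as a stand-in: the per-iteration learning rate μ is chosen from the already-computed gradient and loss values, applied here to a toy logarithmic (logistic) objective. Everything in it is an illustrative assumption, not the paper's method.

```python
# Hedged sketch: per-iteration learning-rate selection for gradient descent,
# using a backtracking line search in place of the paper's (unspecified) rule.
import numpy as np

def train(loss, grad, w0, iters=100):
    w = w0.copy()
    for _ in range(iters):
        g = grad(w)
        mu, f0 = 1.0, loss(w)
        # shrink mu until the step actually decreases the loss (Armijo-style)
        while loss(w - mu * g) > f0 - 1e-4 * mu * (g @ g) and mu > 1e-8:
            mu *= 0.5
        w -= mu * g
    return w

# toy example: a logarithmic (cross-entropy) objective for a linear classifier
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
loss = lambda w: -np.mean(y * np.log(sigmoid(X @ w) + 1e-12)
                          + (1 - y) * np.log(1 - sigmoid(X @ w) + 1e-12))
grad = lambda w: X.T @ (sigmoid(X @ w) - y) / len(y)
print("final loss:", loss(train(loss, grad, np.zeros(3))))
```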
Funding: Supported by the National Natural Science Foundation of China (60904018, 61203040), the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352), the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004), and the Science Research Foundation of Huaqiao University (09BS617).
Abstract: To accelerate supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), originally devised to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with a momentum term, the two modifications balance each other in a beneficial way to achieve rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best: with momentum it has the fastest convergence rate, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
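For reference, the delta-bar-delta rule keeps a per-weight learning rate that grows additively while the current gradient agrees in sign with an exponential average of past gradients and shrinks multiplicatively when the sign flips. The sketch below applies it, together with a momentum term, to plain gradient descent on a toy quadratic; SpikeProp-specific gradients are omitted and the constants are illustrative.

```python
# Sketch of the delta-bar-delta learning-rate adaptation (Jacobs, 1988) with momentum.
import numpy as np

def delta_bar_delta(grad, w, iters=200, kappa=0.01, phi=0.1, theta=0.7, alpha=0.5):
    lr = np.full_like(w, 0.05)          # per-weight learning rates
    bar = np.zeros_like(w)              # exponential average of past gradients
    step = np.zeros_like(w)             # momentum buffer
    for _ in range(iters):
        g = grad(w)
        agree = g * bar
        lr = np.where(agree > 0, lr + kappa, lr)       # same sign: additive increase
        lr = np.where(agree < 0, lr * (1 - phi), lr)   # sign flip: multiplicative decrease
        bar = (1 - theta) * g + theta * bar
        step = -lr * g + alpha * step                  # gradient step plus momentum term
        w = w + step
    return w

A = np.diag([1.0, 10.0])                               # ill-conditioned quadratic 0.5 w^T A w
w_opt = delta_bar_delta(lambda w: A @ w, np.array([5.0, 5.0]))
print(w_opt)  # should move close to the minimiser at the origin
```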
Abstract: In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in the SG raises significant concerns regarding the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to improve secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL agent. By integrating IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties than with standard optimization algorithms, which can improve performance in complex SG environments where the agent must make decisions that enhance the security and efficiency of the network. We compared the proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including the recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machine (SVM), improved crow search algorithm (I-CSA), and grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of our approach relative to related work, showing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL exhibits superior performance in several respects, including scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. On our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation set, accompanied by the lowest RMSE (0.95), indicating very precise predictions with minimal error.
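A highly simplified sketch of the general idea (not the paper's IChOA-RL): a generic population-based optimiser stands in for IChOA and tunes the parameters of a small policy that splits NOMA transmit power between two smart meters to maximise a toy secrecy-capacity reward. The channel model, reward, and hyperparameters are all illustrative assumptions.

```python
# Toy stand-in for metaheuristic-trained RL policy parameters in a NOMA secrecy setting.
import numpy as np

rng = np.random.default_rng(0)

def secrecy_reward(theta, n_episodes=64):
    total = 0.0
    for _ in range(n_episodes):
        h = rng.rayleigh(1.0, size=3)             # [gateway->SM1, gateway->SM2, eavesdropper]
        state = np.append(h, 1.0)                 # channel gains plus a bias term
        alpha = 1 / (1 + np.exp(-state @ theta))  # power fraction for SM1 (policy output)
        p, noise = 10.0, 1.0
        snr1 = alpha * p * h[0] ** 2 / noise
        snr2 = (1 - alpha) * p * h[1] ** 2 / (alpha * p * h[1] ** 2 + noise)
        snr_e = p * h[2] ** 2 / noise
        cs = max(0.0, np.log2(1 + snr1) + np.log2(1 + snr2) - np.log2(1 + snr_e))
        total += cs                               # toy secrecy-capacity reward
    return total / n_episodes

# population-based search over policy parameters (a crude stand-in for IChOA)
pop = rng.normal(size=(30, 4))
for gen in range(40):
    fitness = np.array([secrecy_reward(t) for t in pop])
    elite = pop[np.argsort(fitness)[-10:]]                    # keep the best policies
    pop = elite[rng.integers(0, 10, 30)] + 0.2 * rng.normal(size=(30, 4))
print("best average secrecy capacity:", max(secrecy_reward(t) for t in pop))
```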
Abstract: A method is introduced that leverages machine learning algorithms to predict the success rate of science fiction films. In the film industry, extensive research and accurate forecasting are vital for anticipating a movie's success prior to its debut, and our study aims to harness available data to estimate a film's early success rate. The internet offers a wealth of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for movie trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and film evaluations profoundly affect its opening-week earnings. Hence, our research employs supervised machine learning techniques to predict a film's success, with the Internet Movie Database (IMDb) serving as a comprehensive data repository for nearly all movies. A predictive classification approach is developed using several machine learning algorithms, namely fine, medium, coarse, cosine, cubic, and weighted KNN, and the best model is determined by evaluating each configuration on composite metrics. Moreover, the significant influence of social media platforms, including Twitter, Instagram, and Facebook, on shaping individuals' opinions is taken into account: a hybrid success-rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from the available platforms. The findings demonstrate that the chosen algorithms offer more precise estimates, faster execution times, and higher accuracy than previous research. By combining the features of existing prediction models with social media sentiment analysis, the proposed approach provides a highly accurate prediction of a movie's success, which can help producers and marketers anticipate a film's performance before its release and tailor their promotional activities accordingly. The work also lays a foundation for developing even more accurate prediction models, given the ever-increasing role of social media platforms in shaping individuals' opinions.
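A hedged sketch of the classifier-comparison step only: several KNN configurations that roughly mirror the "fine / medium / coarse / cosine / cubic / weighted" variants named above, compared by cross-validated accuracy. The features and labels are synthetic placeholders, not IMDb or social media data, and no sentiment-analysis stage is included.

```python
# Sketch, not the authors' pipeline: comparing KNN variants on placeholder movie features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))  # placeholder: budget, likes, trailer views, ratings, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # hit / flop

variants = {
    "fine (k=1)":     KNeighborsClassifier(n_neighbors=1),
    "medium (k=10)":  KNeighborsClassifier(n_neighbors=10),
    "coarse (k=100)": KNeighborsClassifier(n_neighbors=100),
    "cosine":         KNeighborsClassifier(n_neighbors=10, metric="cosine", algorithm="brute"),
    "cubic (p=3)":    KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "weighted":       KNeighborsClassifier(n_neighbors=10, weights="distance"),
}
for name, knn in variants.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), knn), X, y, cv=5).mean()
    print(f"{name:15s} CV accuracy: {acc:.3f}")
```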
Abstract: Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training Artificial Neural Networks (ANNs) has remained a challenging task due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: This research used the RandomizedSearchCV algorithm, a random search approach, to bridge this knowledge gap. The algorithm was applied to container dwell time data from the terminal operating system (TOS) of the Port of Tema, comprising 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods in terms of both reducing training time and improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to the data extracted from the Port of Tema and may differ in other contexts; further research is needed to generalize these findings across various port systems. Originality/Value: This research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training in container dwell time prediction. It also highlights the significance of automated learning rate selection, offering novel insights into the optimization of container dwell time prediction, with implications for improving port efficiency and supply chain operations.
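A minimal sketch of the approach described above: scikit-learn's RandomizedSearchCV sampling constant learning rates for an ANN regressor. The Port of Tema data and exact network are not reproduced; synthetic placeholder data stands in for the container records.

```python
# Hedged sketch: random search over the constant learning rate of an MLP regressor.
import numpy as np
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                                      # placeholder container features
y = 5 + X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)   # placeholder dwell time (days)

search = RandomizedSearchCV(
    MLPRegressor(hidden_layer_sizes=(64, 32), solver="adam",
                 learning_rate="constant", max_iter=500, random_state=0),
    param_distributions={"learning_rate_init": loguniform(1e-4, 1e-1)},  # sampled learning rates
    n_iter=20, cv=3, scoring="neg_root_mean_squared_error", random_state=0)
search.fit(X, y)
print("best constant learning rate:", search.best_params_["learning_rate_init"])
print("CV RMSE:", -search.best_score_)
```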
Funding: Supported by the National Key R&D Program of China (No. 2023YFC2908200), the National Natural Science Foundation of China (No. 52174249), and the Key Research and Development Program of Jiangxi Province (No. 20203BBGL73231).
Abstract: With the rise of artificial intelligence (AI) in mineral processing, predicting flotation indexes has attracted significant research attention. Nevertheless, current prediction models suffer from low accuracy and high prediction errors. Therefore, this paper adopts a two-step procedure. First, outliers are processed using the box chart method and a filtering algorithm. Then, the decision tree (DT), support vector regression (SVR), random forest (RF), and the bagging, boosting, and stacking integration algorithms are employed to construct a flotation recovery prediction model. Extensive experiments compared the prediction accuracy of the six modeling methods on flotation recovery and examined the impact of different base-model combinations on the stacking model's prediction accuracy. In addition, field data verified the model's effectiveness. This study demonstrates that the stacking ensemble approach, which uses ten variables to predict flotation recovery, yields a better prediction than the bagging ensemble approach and the single models, achieving MAE, RMSE, R2, and MRE scores of 0.929, 1.370, 0.843, and 1.229%, respectively. The hit rates within error ranges of ±2% and ±4% are 82.4% and 94.6%, respectively. Consequently, the predictions are relatively precise and of significant value in actual production.
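A hedged sketch of the modelling step only (the box-chart outlier handling is omitted): a stacking ensemble over DT, SVR, and RF base learners combined by a simple meta-learner, evaluated with MAE, R2, and a ±2% hit rate. The ten process variables and data below are synthetic placeholders, and the meta-learner choice is an assumption.

```python
# Sketch of a stacking regressor for flotation-recovery prediction on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 10))                                        # ten process variables (placeholder)
y = 80 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=1.0, size=1500)  # recovery, %

stack = StackingRegressor(
    estimators=[("dt", DecisionTreeRegressor(max_depth=8)),
                ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
                ("rf", RandomForestRegressor(n_estimators=300, random_state=0))],
    final_estimator=Ridge())                                           # meta-learner over base predictions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred), "R2:", r2_score(y_te, pred))
print("hit rate within ±2 percentage points:", np.mean(np.abs(pred - y_te) <= 2))
```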
Abstract: This paper considers online classification learning algorithms for regularized classification schemes with a generalized gradient. A novel capacity-independent approach is presented. It establishes strong convergence of the algorithm and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm needs fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and the regularization parameters.
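An illustrative sketch (not the paper's analysis): online regularized classification driven by hinge-loss subgradients, with polynomially decaying step sizes of the form eta_t = eta_1 * t^(-theta). The constants and the particular loss are arbitrary choices used only to show the update structure.

```python
# Online regularized classification with polynomially decaying step sizes (toy example).
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])

def online_classify(T=5000, eta1=1.0, theta=0.6, lam=1e-3):
    w = np.zeros(3)
    for t in range(1, T + 1):
        x = rng.normal(size=3)
        y = 1.0 if x @ w_true > 0 else -1.0            # stream of labelled examples
        eta = eta1 * t ** (-theta)                     # polynomially decaying step size
        # subgradient of the hinge loss plus a Tikhonov regularisation term
        g = (-y * x if y * (w @ x) < 1 else np.zeros(3)) + lam * w
        w = w - eta * g
    return w

w = online_classify()
cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
print("angle to true separator (degrees):", np.degrees(np.arccos(np.clip(cos, -1, 1))))
```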
Funding: This work was supported by the National Natural Science Foundation of China (11271136, 81530086), the Program of Shanghai Subject Chief Scientist (14XD1401600), and the 111 Project of China (No. B14019).
Abstract: The gradient descent (GD) algorithm is a widely used optimisation method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and the Nesterov accelerated gradient (NAG), we give the convergence of the algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences rather than two sister sequences. To illustrate the performance of this algorithm, we compare it with the three other algorithms on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. To conveniently apply the proposed algorithms, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet'; all deep learning algorithms in this paper are included in the 'supneuralnet' package. Finally, we show that our algorithms are superior to the other algorithms in four case studies.
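For orientation, the sketch below compares the three baseline update rules named above (GD, Polyak's heavy-ball momentum, and NAG) on a simple quadratic f(w) = 0.5 w^T A w. The exact three-sequence TAG update is not given in the abstract, so it is not reproduced here; step size and momentum values are illustrative.

```python
# Baseline comparison of GD, Polyak's momentum, and NAG on a quadratic objective.
import numpy as np

A = np.diag([1.0, 20.0])
grad = lambda w: A @ w
w0 = np.array([5.0, 5.0])
eta, beta, iters = 0.04, 0.8, 100

def gd(w):
    for _ in range(iters):
        w = w - eta * grad(w)                         # plain gradient step
    return w

def polyak(w):
    v = np.zeros_like(w)
    for _ in range(iters):
        v = beta * v - eta * grad(w)                  # heavy-ball: momentum on past steps
        w = w + v
    return w

def nag(w):
    v = np.zeros_like(w)
    for _ in range(iters):
        v = beta * v - eta * grad(w + beta * v)       # gradient at the look-ahead point
        w = w + v
    return w

for name, fn in [("GD", gd), ("Polyak momentum", polyak), ("NAG", nag)]:
    w = fn(w0.copy())
    print(f"{name:16s} final f(w) = {0.5 * w @ A @ w:.2e}")
```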
Abstract: In this paper, two Evolutionary Algorithms (EAs), namely an improved Genetic Algorithm (GA) and the Population Based Incremental Learning (PBIL) algorithm, are applied to the optimal coordination of directional overcurrent relays in an interconnected power system network. The problem of coordinating directional overcurrent relays is formulated as an optimization problem that is solved via the improved GA and PBIL. The simulation results obtained using the improved GA are compared with those obtained using PBIL. The results show that the improved GA proposed in this paper performs better than PBIL.
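A sketch of the core PBIL loop on a toy binary objective, to make the algorithm concrete: a probability vector over bits is sampled into a population, and the vector is shifted toward the best individual each generation. The actual relay-coordination objective (operating-time sum plus coordination constraints) is not reproduced here; the toy fitness and constants are assumptions.

```python
# Core Population Based Incremental Learning (PBIL) loop on a toy binary objective.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, lr, generations = 40, 50, 0.1, 100
target = rng.integers(0, 2, n_bits)                         # stand-in optimum

fitness = lambda ind: np.sum(ind == target)                 # toy objective to maximise

p = np.full(n_bits, 0.5)                                    # probability vector over bits
for _ in range(generations):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample a population
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    p = (1 - lr) * p + lr * best                            # shift probabilities toward the best
    mut = rng.random(n_bits) < 0.02                         # small mutation keeps exploration alive
    p[mut] = (1 - 0.05) * p[mut] + 0.05 * rng.integers(0, 2, mut.sum())

best_ind = (p > 0.5).astype(int)
print("best fitness found:", fitness(best_ind), "out of", n_bits)
```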
Funding: Supported by the Natural Science Foundation of China (No. 30070211).
Abstract: A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database consisting of 352 samples is used for training and testing the system, and its performance is assessed by cross-validation. It is found that as the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network increase, and the system has great application prospects in supporting the diagnosis of heart diseases.
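A rough modern sketch of this kind of setup: an MLP classifier trained by SGD with a momentum term and an adaptive learning rate, assessed by cross-validation. The 352-sample heart-disease database is not available here, so synthetic placeholder data with five classes stands in for the five disease categories; the network size and hyperparameters are assumptions.

```python
# Hedged sketch: MLP with momentum and adaptive learning rate, cross-validated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=352, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)    # placeholder for the clinical data

mlp = MLPClassifier(hidden_layer_sizes=(30,), solver="sgd",
                    momentum=0.9, learning_rate="adaptive",  # momentum term + adaptive rate
                    learning_rate_init=0.05, max_iter=2000, random_state=0)

scores = cross_val_score(make_pipeline(StandardScaler(), mlp), X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```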