Funding: Supported by the National Natural Science Foundation of China.
Abstract: Learning in feedforward multilayer perceptron (MLP) networks consists of adapting all synaptic weights so that the discrepancy between the actual output signals and the desired signals, averaged over all learning examples (training patterns), is as small as possible. Backpropagation (BP), or variations thereof, is the standard method for adjusting the synaptic weights to minimize a given cost function. However, as a steepest-descent approach, the BP algorithm is too slow for many applications. Since the late 1980s, many efforts aimed at improving the efficiency of the algorithm have been reported in the literature. Among them, a recently proposed learning strategy based on linearization of the nonlinear activation functions and optimization of the multilayer perceptron layer by layer (OLL) seems promising. In this paper a modified learning procedure is presented that finds the weight-change vector at each trial iteration of the OLL algorithm more efficiently. The proposed procedure saves expensive computation and yields a better convergence rate than the original OLL learning algorithm, especially for large-scale networks. The improved OLL algorithm is applied to the time-series prediction problems presented by the OLL authors and demonstrates faster learning.
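The layer-by-layer idea behind OLL — linearize the activation around the current operating point and solve a linear least-squares problem for the weight change — can be sketched for a single sigmoid output unit. This is a minimal numpy illustration of the principle, not the authors' exact OLL procedure; the function names and toy data are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def linearized_update(H, t, w):
    """One OLL-style update for a single sigmoid output unit.

    H: (N, h) hidden activations, t: (N,) targets, w: (h,) weights.
    Linearize f(H(w + dw)) ~= f(Hw) + f'(Hw) * (H dw) and solve the
    resulting linear least-squares problem for dw in closed form.
    """
    y = sigmoid(H @ w)
    d = y * (1.0 - y)                  # sigmoid slope at the operating point
    # Weighted least squares: minimize ||diag(d) H dw - (t - y)||^2
    dw, *_ = np.linalg.lstsq(d[:, None] * H, t - y, rcond=None)
    return w + dw

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 4))
t = sigmoid(H @ np.array([1.0, -2.0, 0.5, 0.8]))   # realizable toy targets
w = np.zeros(4)
for _ in range(20):
    w = linearized_update(H, t, w)
err = float(np.max(np.abs(sigmoid(H @ w) - t)))
```

Because each step solves for the whole weight-change vector at once, convergence on small problems is typically much faster than plain gradient descent.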
Abstract: Global warming is one of the most complicated challenges of our time, placing considerable stress on our societies and on the environment. Its impacts are felt in an unprecedented variety of ways, from shifting weather patterns that threaten food production to rising sea levels that heighten the risk of catastrophic flooding. Among the aspects related to global warming, there is growing concern about water resource management, a field aimed at preventing future water crises. The very first stage in such management is to recognize the prospective climate parameters influencing future water resource conditions, and numerous prediction models, methods, and tools have been developed and applied to this end. In line with this trend, the current study compares three optimization algorithms on the platform of a multilayer perceptron (MLP) network to explore any meaningful connection between large-scale climate indices (LSCIs) and precipitation in the capital of Iran, a country located in an arid and semi-arid region that suffers from severe water scarcity caused by years of mismanagement and intensified by global warming. This situation has driven a great deal of the population to migrate towards more developed cities within the country, especially towards Tehran, so the current and future environmental conditions of this city, especially its water supply, are of great importance. To provide an outlook for future precipitation and develop forecasting trajectories compatible with the region's characteristics, the present study investigates three training methods, namely backpropagation (BP), genetic algorithms (GAs), and particle swarm optimization (PSO), on an MLP platform. Two frameworks distinguished by their input compositions are defined: the Concurrent Model Framework (CMF) and the Integrated Model Framework (IMF). Through these two frameworks, 13 cases are generated: 12 cases within CMF, each containing all selected LSCIs at the same lead-time, and one case within IMF constituted from the combination of the LSCIs most correlated with Tehran precipitation at each lead-time. Following the evaluation of all model performances through related statistical tests, a Taylor diagram is used to compare the final selected models of all three optimization algorithms, the best of which is found to be MLP-PSO in IMF.
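As one illustration of the kind of training compared in this study, a minimal particle swarm optimization loop can adjust an MLP's weight vector directly, using prediction error as the fitness. This is a generic numpy sketch on toy data, not the study's actual model or settings; all names and hyperparameters are illustrative.

```python
import numpy as np

def mlp_forward(x, w, n_hidden):
    """Tiny 1-input, n_hidden-unit, 1-output tanh MLP; w is a flat vector."""
    i = 0
    W1 = w[i:i + n_hidden].reshape(1, n_hidden); i += n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def pso_train(x, y, n_hidden=5, n_particles=30, iters=200, seed=0):
    """PSO over the MLP's flattened weights, fitness = mean squared error."""
    rng = np.random.default_rng(seed)
    dim = 3 * n_hidden + 1
    pos = rng.normal(scale=0.5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    mse = lambda w: float(np.mean((mlp_forward(x, w, n_hidden) - y) ** 2))
    pbest = pos.copy()
    pbest_f = np.array([mse(w) for w in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = float(pbest_f.min())
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive + social terms (standard PSO velocity update)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        for k in range(n_particles):
            f = mse(pos[k])
            if f < pbest_f[k]:
                pbest_f[k], pbest[k] = f, pos[k].copy()
                if f < gbest_f:
                    gbest_f, gbest = f, pos[k].copy()
    return gbest, gbest_f

x = np.linspace(-2.0, 2.0, 40).reshape(-1, 1)
y = np.sin(x)                              # toy target series
best_w, best_mse = pso_train(x, y)
```

Unlike BP, the swarm needs no gradient of the network, which is why PSO and GAs are natural alternatives when the error surface is rugged.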
Funding: This research was financially supported by the Ministry of Small and Medium-sized Enterprises (SMEs) and Startups (MSS), Korea, under the "Regional Specialized Industry Development Program (R&D, S2855401)" supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Despite advances in smart grids over the last decades, energy consumption forecasting using meteorological features is still challenging. This paper proposes a genetic algorithm-based adaptive error curve learning ensemble (GA-ECLE) model. The proposed technique copes with the stochastic variations of energy consumption using a machine learning-based ensemble approach. A modified ensemble model that uses the model error as a feature is employed to improve forecast accuracy. The approach combines three models, namely CatBoost (CB), Gradient Boost (GB), and Multilayer Perceptron (MLP). The ensembled CB-GB-MLP model's inner mechanism generates meta-data from the Gradient Boosting and CatBoost models and computes the final predictions with the Multilayer Perceptron network. A genetic algorithm is used to select the optimal features for the model. To demonstrate the proposed model's effectiveness, we use a four-phase technique on Jeju Island's real energy consumption data. In the first phase, we obtain results by applying the CB-GB-MLP model. In the second phase, we use a GA-ensembled model with optimal features. The third phase compares the energy forecasting results with the proposed ECL-based model, and the fourth and final phase applies the GA-ECLE model. We obtained a mean absolute error of 3.05 and a root mean square error of 5.05. Extensive experimental results demonstrate the superiority of the proposed GA-ECLE model over traditional ensemble models.
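The stacking mechanism described — base-model predictions becoming meta-data for an MLP — can be sketched with scikit-learn. Since CatBoost is a third-party library, a second gradient-boosting configuration stands in for it here; the data, models, and hyperparameters are illustrative, not those of the GA-ECLE paper, and the GA feature-selection step is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an energy-consumption dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two boosting base learners stand in for CatBoost and Gradient Boost.
base1 = GradientBoostingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
base2 = GradientBoostingRegressor(n_estimators=100, max_depth=2, random_state=1).fit(X_tr, y_tr)

# Meta-data: base-model predictions become the MLP meta-learner's features.
meta_tr = np.column_stack([base1.predict(X_tr), base2.predict(X_tr)])
meta_te = np.column_stack([base1.predict(X_te), base2.predict(X_te)])
meta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(meta_tr, y_tr)

mae = float(np.mean(np.abs(meta.predict(meta_te) - y_te)))
```

The meta-learner sees only the base predictions, so it learns how to weight and correct them rather than re-learning the raw feature mapping.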
Abstract: In this study, the author investigates advanced machine learning models from two different methodologies to determine the most effective way to predict heart failure and cardiovascular disease in individuals. The first methodology involves a list of classification machine learning algorithms, and the second involves a deep learning algorithm known as the Multilayer Perceptron (MLP). Globally, hospitals deal with cardiovascular disease and heart failure as major causes of death, not only for overweight individuals but also for those who do not adopt a healthy diet and lifestyle. Heart failure and cardiovascular diseases can be caused by many factors, including cardiomyopathy, high blood pressure, coronary heart disease, and heart inflammation [1]. Other factors, such as irregular shocks or stress, can also contribute to heart failure or a heart attack. While these events cannot themselves be predicted, continuous data on patients' health can help doctors predict heart failure. This data-driven research therefore uses advanced machine learning and deep learning techniques to analyze the data and provide doctors with informative decision-making tools about a person's likelihood of experiencing heart failure. The author employed advanced data preprocessing and cleaning techniques, and the dataset was tested with both methodologies to determine which produces the best predictions. The first methodology employed supervised classification algorithms: KNN, SVM, AdaBoost, Logistic Regression, Naive Bayes, and Decision Tree, which achieved accuracy rates of 86%, 89%, 89%, 81%, 79%, and 99%, respectively. The Decision Tree algorithm was found unsuitable for the dataset at hand due to overfitting and was discarded as a candidate model. The second methodology used the Multilayer Perceptron, which offered the flexibility to experiment with different layer sizes and activation functions such as ReLU, logistic (sigmoid), and tanh. This neural network demonstrated the most stable accuracy, achieving over 87% while adapting well to real-life situations and requiring little computing power overall. A performance assessment and evaluation based on a confusion matrix report was carried out to demonstrate feasibility and performance. The author concludes that the model's performance in real-life situations can advance not only medical science but also the underlying mathematical concepts, and that the preprocessing approach behind the model can provide value to the data science community. The model can be further developed with various optimization techniques to handle larger heart failure datasets, and different neural network algorithms can be tested to explore alternative approaches.
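The confusion-matrix report mentioned above reduces to a handful of counts and ratios. A minimal sketch for binary labels follows, using made-up predictions rather than the paper's results:

```python
import numpy as np

def confusion_matrix_2x2(y_true, y_pred):
    """2x2 confusion matrix [[TN, FP], [FN, TP]] for 0/1 labels."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def report(m):
    """Accuracy, precision, recall, and F1 derived from the matrix."""
    tn, fp, fn, tp = m.ravel()
    acc = (tp + tn) / m.sum()
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # illustrative labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # illustrative predictions
m = confusion_matrix_2x2(y_true, y_pred)
acc, prec, rec, f1 = report(m)
```

Reporting all four quantities rather than accuracy alone is what exposes problems like the Decision Tree's overfitting on imbalanced medical data.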
Abstract: Machine learning (ML) has taken the world by storm, with prevalent applications in automating ordinary tasks and extracting insights throughout scientific research and design. ML is a large area within artificial intelligence (AI) that focuses on obtaining valuable information from data, which is why it has often been linked to statistics and data science. In this work, an advanced meta-heuristic optimization algorithm is proposed for the antenna architecture design problem. The algorithm is a hybrid of the Sine Cosine Algorithm (SCA) and the Grey Wolf Optimizer (GWO) and is used to train a neural-network-based Multilayer Perceptron (MLP). The proposed algorithm is a practical, versatile, and trustworthy platform for recognizing the optimal design parameters of a double T-shaped monopole antenna. Comparative and statistical analyses, including convergence curves, ANOVA, and T-tests, demonstrate the superiority and stability of the predicted results and verify the procedure's accuracy.
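A minimal Grey Wolf Optimizer core — the GWO half of the proposed SCA-GWO hybrid — can be sketched as follows, here minimizing a benchmark sphere function rather than training the antenna MLP. The SCA position-update rule and all problem details are omitted, and every name and parameter is illustrative.

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal Grey Wolf Optimizer: each wolf moves toward the three best
    wolves (alpha, beta, delta) with a coefficient a decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 * (1 - t / iters)            # exploration -> exploitation
        new = np.empty_like(X)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - X[i]))
            new[i] = np.clip(np.mean(cand, axis=0), lb, ub)
        X = new
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], float(fit.min())

# Sphere benchmark: global minimum 0 at the origin.
best_x, best_f = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
```

Hybrids such as SCA-GWO typically replace part of this position update with the sine/cosine oscillation to improve exploration.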
Funding: Supported by the National Natural Science Foundation of China (No. 30070211).
Abstract: A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database of 352 samples is used for training and testing, and the performance of the system is assessed by cross-validation. As the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network are enhanced, and the system shows great promise for supporting heart disease diagnosis.
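Two of the BP improvements named above, the momentum term and the adaptive learning rate, can be sketched for a single logistic unit. This is a generic illustration on synthetic two-class data, not the paper's diagnostic network; the accept/back-off schedule and all constants are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_adaptive_momentum(X, y, lr=0.2, mom=0.9, epochs=300, seed=0):
    """Gradient descent on MSE for one logistic unit, with a momentum term
    and a simple adaptive learning rate: grow the rate while the error
    falls, halve it (and reset momentum) when the error rises."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    v = np.zeros_like(w)
    prev = np.inf
    for _ in range(epochs):
        p = sigmoid(X @ w)
        loss = float(np.mean((p - y) ** 2))
        if loss > prev:            # error rose: back off, cancel momentum
            lr *= 0.5
            v[:] = 0.0
        else:                      # error fell: accelerate slightly
            lr *= 1.05
        prev = loss
        grad = X.T @ ((p - y) * p * (1.0 - p)) / len(y)
        v = mom * v - lr * grad    # momentum smooths successive updates
        w = w + v
    final = float(np.mean((sigmoid(X @ w) - y) ** 2))
    return w, final

# Two well-separated synthetic clusters, one per class, plus a bias column.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.4, size=(50, 2)),
               rng.normal(1.0, 0.4, size=(50, 2))])
Xb = np.column_stack([np.ones(100), X])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, final_loss = train_adaptive_momentum(Xb, y)
acc = float(np.mean((sigmoid(Xb @ w) > 0.5) == y))
```

The grow/halve schedule is the classic adaptive-BP heuristic: it pushes the step size as large as the error surface will tolerate and retreats when a step overshoots.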
Abstract: To address the data sparsity and cold-start problems in recommendation algorithms, two collaborative filtering deep recommendation algorithms incorporating temporal features are proposed: one based on a convolutional neural network (CNN-deep recommend algorithm with time, C-DRAWT) and one based on a multilayer perceptron (MLP-deep recommend algorithm with time, M-DRAWT). The algorithms preprocess the data and encode user and item information in binary, alleviating the sparsity problem of one-hot encoding. Hidden features of users and items are extracted, fused with timestamp features, and fed into an optimized convolutional neural network and multilayer perceptron, respectively, to obtain the most recent recommended items. In comparative experiments on the MovieLens-1M dataset, the two algorithms improved the F1-Score by an average of 0.78% and the RMSE by an average of 2.7%. The results show that this method can alleviate the data sparsity and cold-start problems and achieves better recommendation performance than previous models.
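The binary ID encoding used to relieve one-hot sparsity can be sketched in a few lines: an integer ID becomes a fixed-width bit vector, so MovieLens-1M's 6040 users need only 13 columns instead of 6040. The helper name is illustrative.

```python
import numpy as np

def binary_encode(ids, n_bits):
    """Encode integer IDs as fixed-width bit vectors (MSB first):
    n_bits columns instead of one column per distinct ID."""
    ids = np.asarray(ids, dtype=np.int64)
    shifts = np.arange(n_bits - 1, -1, -1)           # MSB down to LSB
    return ((ids[:, None] >> shifts) & 1).astype(np.int8)

user_ids = [0, 3, 6040]                   # MovieLens-1M has 6040 users
enc = binary_encode(user_ids, n_bits=13)  # 2**13 = 8192 covers all IDs
```

The encoding is lossless (each row decodes back to its ID) while shrinking the input width from thousands of mostly-zero columns to a handful of dense bits.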