The current study aimed at evaluating the capabilities of seven advanced machine learning techniques (MLTs), including Support Vector Machine (SVM), Random Forest (RF), Multivariate Adaptive Regression Spline (MARS), Artificial Neural Network (ANN), Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), and Naive Bayes (NB), for landslide susceptibility modeling and comparison of their performances. Coupling machine learning algorithms with spatial data types for landslide susceptibility mapping is a vitally important issue. This study was carried out using GIS and R open-source software at Abha Basin, Asir Region, Saudi Arabia. First, a total of 243 landslide locations were identified at Abha Basin to prepare the landslide inventory map using different data sources. All the landslide areas were randomly separated into two groups, with 70% used for training and 30% for validation. Twelve landslide variables were generated for landslide susceptibility modeling: altitude, lithology, distance to faults, normalized difference vegetation index (NDVI), land use/land cover (LULC), distance to roads, slope angle, distance to streams, profile curvature, plan curvature, slope length (LS), and slope aspect. The area under the curve (AUC-ROC) approach was applied to evaluate, validate, and compare the performance of the MLTs. The results indicated that AUC values for the seven MLTs range from 89.0% for QDA to 95.1% for RF. Our findings showed that RF (AUC = 95.1%) and LDA (AUC = 94.17%) produced the best performances in comparison to the other MLTs. The outcome of this study and the landslide susceptibility maps would be useful for environmental protection.
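For orientation, the sketch below illustrates the workflow just described (a 70/30 split, a Random Forest classifier, and AUC-ROC validation) using scikit-learn. The feature matrix is synthetic and the sample counts are placeholders, not the study's data or code.

```python
# Minimal sketch (not the authors' code): random 70/30 split, Random Forest,
# and AUC-ROC evaluation on synthetic data. In the study, each row would hold
# the twelve conditioning factors sampled at a mapped location.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(486, 12))          # placeholder: landslide + non-landslide points, 12 factors
y = np.r_[np.ones(243), np.zeros(243)]  # 1 = landslide, 0 = stable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=1)
model = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```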
This investigation assessed the efficacy of 10 widely used machine learning algorithms (MLA) comprising the least absolute shrinkage and selection operator (LASSO), generalized linear model (GLM), stepwise generalized linear model (SGLM), elastic net (ENET), partial least square (PLS), ridge regression, support vector machine (SVM), classification and regression trees (CART), bagged CART, and random forest (RF) for gully erosion susceptibility mapping (GESM) in Iran. The locations of 462 previously existing gully erosion sites were mapped through widespread field investigations, of which 70% (323) and 30% (139) of observations were arbitrarily divided for algorithm calibration and validation. Twelve controlling factors for gully erosion, namely soil texture, annual mean rainfall, digital elevation model (DEM), drainage density, slope, lithology, topographic wetness index (TWI), distance from rivers, aspect, distance from roads, plan curvature, and profile curvature, were ranked in terms of their importance using each MLA. The MLA were compared using a training dataset for gully erosion and statistical measures such as RMSE (root mean square error), MAE (mean absolute error), and R-squared. Based on the comparisons among MLA, the RF algorithm exhibited the minimum RMSE and MAE and the maximum value of R-squared, and was therefore selected as the best model. The variable importance evaluation using the RF model revealed that distance from rivers had the highest significance in influencing the occurrence of gully erosion, whereas plan curvature had the least importance. According to the GESM generated using RF, most of the study area is predicted to have a low (53.72%) or moderate (29.65%) susceptibility to gully erosion, whereas only a small area is identified to have a high (12.56%) or very high (4.07%) susceptibility. The outcome generated by the RF model is validated using the ROC (Receiver Operating Characteristics) curve approach, which returned an area under the curve (AUC) of 0.985, proving the excellent forecasting ability of the model. The GESM prepared using the RF algorithm can aid decision-makers in targeting remedial actions for minimizing the damage caused by gully erosion.
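The variable-importance ranking step described above can be reproduced in outline as follows; the factor names come from the abstract, while the data and the resulting ranking are purely illustrative.

```python
# Minimal sketch of Random Forest variable-importance ranking on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

factors = ["soil_texture", "rainfall", "elevation", "drainage_density", "slope",
           "lithology", "TWI", "dist_rivers", "aspect", "dist_roads",
           "plan_curvature", "profile_curvature"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(462, len(factors))), columns=factors)
y = rng.random(462)                       # placeholder gully-erosion susceptibility target

rf = RandomForestRegressor(n_estimators=500, random_state=1).fit(X, y)
ranking = pd.Series(rf.feature_importances_, index=factors).sort_values(ascending=False)
print(ranking)                            # in the study, distance from rivers ranked first
```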
Some countries have announced national benchmark rates, while others have been preparing for the retirement of the London Interbank Offered Rate at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered using daily data between December 28, 2018, and December 31, 2020, by applying machine learning algorithms and Ordinary Least Squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by Central Banks; (2) country-level macroeconomic factors have a higher impact, whereas global factors are less important, and the pandemic does not have a significant effect; (3) Random Forest is the most accurate prediction model. Taking action by considering the study's findings can help support economic growth by achieving low-level benchmark rates.
This paper presents an improved BP algorithm. The approach reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) for each iteration is determined by a dynamic optimization method to accelerate the convergence rate. Since the determination of the learning rate in the proposed BP algorithm uses only the first-order derivatives already computed in the standard BP algorithm (SBP), the computational and storage burden is similar to that of the SBP algorithm, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
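The paper's exact rule for choosing μ(k) is not reproduced here; the sketch below only illustrates the general idea of re-selecting the learning rate at every iteration, using a standard backtracking line search on a toy quadratic objective.

```python
# Generic illustration (not the paper's rule): gradient descent where the learning
# rate mu(k) is re-chosen each iteration by a backtracking line search.
import numpy as np

def train(grad, loss, w, iters=100, mu0=1.0, beta=0.5, c=1e-4):
    for _ in range(iters):
        g = grad(w)
        mu = mu0
        # shrink mu until the step gives a sufficient decrease in the objective
        while loss(w - mu * g) > loss(w) - c * mu * g @ g:
            mu *= beta
        w = w - mu * g
    return w

# toy quadratic objective standing in for a network's error function
A = np.diag([1.0, 10.0])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
print(train(grad, loss, np.array([5.0, 5.0])))
```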
A groundbreaking method is introduced to leverage machine learning algorithms to revolutionize the prediction of success rates for science fiction films. In the captivating world of the film industry, extensive research and accurate forecasting are vital to anticipating a movie's triumph prior to its debut. Our study aims to harness the power of available data to estimate a film's early success rate. With the vast resources offered by the internet, we can access a plethora of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for movie trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and film evaluations profoundly impact its opening-week earnings. Hence, our research employs advanced supervised machine learning techniques to predict a film's triumph. The Internet Movie Database (IMDb) is a comprehensive data repository for nearly all movies. A robust predictive classification approach is developed by employing various machine learning algorithms, such as fine, medium, coarse, cosine, cubic, and weighted KNN. To determine the best model, the performance of each feature was evaluated based on composite metrics. Moreover, the significant influence of social media platforms, including Twitter, Instagram, and Facebook, on shaping individuals' opinions was recognized. A hybrid success rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from the available platforms. The findings of this study demonstrate that the chosen algorithms offer more precise estimations, faster execution times, and higher accuracy rates when compared to previous research. By integrating the features of existing prediction models and social media sentiment analysis models, our proposed approach provides a remarkably accurate prediction of a movie's success. This breakthrough can help movie producers and marketers anticipate a film's triumph before its release, allowing them to tailor their promotional activities accordingly. Furthermore, the adopted research lays the foundation for developing even more accurate prediction models, considering the ever-increasing significance of social media platforms in shaping individuals' opinions. In conclusion, this study showcases the immense potential of machine learning algorithms in predicting the success rate of science fiction films, opening new avenues for the film industry.
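The named KNN variants (fine, medium, coarse, cosine, cubic, weighted) broadly correspond to different neighbor counts, distance metrics, and weighting schemes; the sketch below approximates them with scikit-learn on synthetic placeholder features, not the IMDb data.

```python
# Hedged approximation of the KNN variants on synthetic movie features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # placeholder features (budget, likes, ...)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder hit/flop label

variants = {
    "fine (k=1)":     KNeighborsClassifier(n_neighbors=1),
    "medium (k=10)":  KNeighborsClassifier(n_neighbors=10),
    "coarse (k=100)": KNeighborsClassifier(n_neighbors=100),
    "cosine":         KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic":          KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
    "weighted":       KNeighborsClassifier(n_neighbors=10, weights="distance"),
}
for name, clf in variants.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```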
Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training Artificial Neural Networks (ANNs) has remained a challenging task due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: This research used a RandomizedSearchCV algorithm, a random search approach, to bridge this knowledge gap. The algorithm was applied to container dwell time data from the TOS system of the Port of Tema, which included 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods both in terms of reducing training time and improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to the data extracted from the Port of Tema and may differ in other contexts. Further research is needed to generalize these findings across various port systems. Originality/Value: This research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training in container dwell time prediction. It also accentuates the significance of automated learning rate selection, offering novel insights into the optimization of container dwell time prediction, with implications for improving port efficiency and supply chain operations.
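A minimal sketch of this search is shown below: RandomizedSearchCV samples candidate constant learning rates (and a few other hyperparameters) and cross-validates an ANN regressor on each. The dwell-time data here is synthetic, not the Port of Tema TOS records.

```python
# Random search over the constant learning rate of an MLP regressor (illustrative only).
import numpy as np
from scipy.stats import loguniform
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                                  # placeholder container features
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=2000)   # placeholder dwell time

search = RandomizedSearchCV(
    MLPRegressor(solver="sgd", learning_rate="constant", max_iter=500, random_state=1),
    param_distributions={
        "learning_rate_init": loguniform(1e-4, 1e-1),
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    },
    n_iter=20, cv=3, scoring="neg_mean_absolute_error", random_state=1,
)
search.fit(X, y)
print(search.best_params_)
```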
For accelerating the supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop the training algorithms for feedforward SNN. The performance of these algorithms is investigated by four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods are able to speed up convergence of SNN compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications will balance each other in a beneficial way to accomplish rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs the best. The delta-bar-delta method with momentum has the fastest convergence rate, the greatest stability of the training process, and the maximum accuracy of network learning. The proposed algorithms in this paper are simple and efficient, and consequently valuable for practical applications of SNN.
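For reference, a generic delta-bar-delta update (per-weight learning rates that grow additively when successive gradients agree in sign and shrink multiplicatively when they disagree) looks as follows; this is a standalone illustration, not the SpikeProp implementation.

```python
# Generic delta-bar-delta learning rate adaptation (Jacobs, 1988) on a toy problem.
import numpy as np

def delta_bar_delta_step(w, grad, rates, bar_delta, kappa=0.01, phi=0.5, theta=0.7):
    agree = grad * bar_delta > 0              # same sign as the averaged past gradient
    disagree = grad * bar_delta < 0
    rates = np.where(agree, rates + kappa, rates)
    rates = np.where(disagree, rates * phi, rates)
    bar_delta = (1 - theta) * grad + theta * bar_delta   # exponential average of gradients
    w = w - rates * grad
    return w, rates, bar_delta

# toy usage on a quadratic bowl
w = np.array([4.0, -3.0]); rates = np.full(2, 0.1); bar = np.zeros(2)
for _ in range(50):
    g = 2 * w                                  # gradient of ||w||^2
    w, rates, bar = delta_bar_delta_step(w, g, rates, bar)
print(w)
```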
With the rise of artificial intelligence (AI) in mineral processing, predicting flotation indexes has attracted significant research attention. Nevertheless, current prediction models suffer from low accuracy and high prediction errors. Therefore, this paper adopts a two-step procedure. First, the outliers are processed using the box chart method and a filtering algorithm. Then, the decision tree (DT), support vector regression (SVR), random forest (RF), and the bagging, boosting, and stacking integration algorithms are employed to construct a flotation recovery prediction model. Extensive experiments compared the prediction accuracy of the six modeling methods on flotation recovery and delved into the impact of diverse base-model combinations on the stacking model's prediction accuracy. In addition, field data have verified the model's effectiveness. This study demonstrates that the stacking ensemble approach, which uses ten variables to predict flotation recovery, yields a more favorable prediction effect than the bagging ensemble approach and the single models, achieving MAE, RMSE, R2, and MRE scores of 0.929, 1.370, 0.843, and 1.229%, respectively. The hit rates within error ranges of ±2% and ±4% are 82.4% and 94.6%. Consequently, the prediction effect is relatively precise and offers significant value in the context of actual production.
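A compact sketch of the two-step flow is given below: an interquartile-range (box chart) rule drops outliers, then a stacking ensemble is built from DT, SVR, and RF base learners. The data, the hold-out split, and the linear meta-learner are illustrative assumptions.

```python
# IQR outlier filtering followed by a stacking regressor (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                       # ten process variables
y = X[:, 0] * 3 + rng.normal(scale=0.5, size=1000)    # placeholder flotation recovery

# IQR rule: keep rows whose target lies within [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = np.percentile(y, [25, 75])
mask = (y >= q1 - 1.5 * (q3 - q1)) & (y <= q3 + 1.5 * (q3 - q1))
X, y = X[mask], y[mask]

stack = StackingRegressor(
    estimators=[("dt", DecisionTreeRegressor(max_depth=6)),
                ("svr", SVR(C=10.0)),
                ("rf", RandomForestRegressor(n_estimators=200, random_state=1))],
    final_estimator=LinearRegression(),
)
print(stack.fit(X[:800], y[:800]).score(X[800:], y[800:]))   # R^2 on a hold-out slice
```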
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, multiple constraints existing in real applications are less studied, especially when one task is involved with several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation in order to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
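The random-keys idea can be stated in a few lines: a continuous key vector, which TLBO can manipulate directly, is decoded into a task permutation by sorting. The sketch below shows only this decoding step and omits the paper's constraint handling.

```python
# Random-keys decoding of a continuous vector into a task permutation (illustrative only).
import numpy as np

def decode_random_keys(keys):
    """Return the task order implied by a vector of continuous keys."""
    return list(np.argsort(keys))          # task with the smallest key is scheduled first

keys = np.array([0.62, 0.11, 0.87, 0.40, 0.25])   # one TLBO individual for 5 tasks
print(decode_random_keys(keys))                   # -> [1, 4, 3, 0, 2]
```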
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in SG raises significant concerns regarding the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to fortify secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL. By integrating the IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties compared to standard optimization algorithms. This can lead to improved performance in complex SG environments, where the agent must make decisions that enhance the security and efficiency of the network. We compared the performance of our proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machine (SVM), improved crow search algorithm (I-CSA), and grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of our approach compared to the related works, showcasing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL exhibits superior performance compared to other algorithms in various aspects, including the scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. For our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation dataset. This was accompanied by the lowest RMSE (0.95), indicating very precise predictions with minimal error.
Background: Studies have shown that heart rate variability (HRV) is a predictor of the prognosis of cardiovascular diseases. Contact heartbeat monitoring equipment is widely used, especially in hospitals, and benefits from the rapidity and accuracy of the detection of physiological health indicators. However, long-term contact with equipment has many adverse effects. The purpose of this study was to improve the accuracy of HRV detection via noncontact equipment, thus enabling HRV to be assessed in various scenarios. Methods: A novel deep learning approach was proposed for measuring heartbeats through camera videos. First, we performed facial segmentation and divided the face into 16 grid cells with different light balance scores. After the trend is filtered by the Hamming window, a transformer-based neural network is used to further filter the signal. Finally, heart rate (HR) and HRV are estimated. Results: We used 1 million synthetic data points for pretraining, and a public dataset in combination with a dataset that we constructed for task training. The final results were obtained on a test dataset that we constructed. The accuracy for HR with a low light balance score (0.867-0.983) was greater than that with a high score (0.667-0.750). Our method had higher accuracy in estimating HR than traditional filtering methods (0.167-0.417) and state-of-the-art neural network filtering methods (0.783-0.917) did. The root mean square error of the HRV from the time domain was the lowest, and the correlation index score was the highest for the HRV from the frequency domain estimated by our method compared with those estimated by two neural networks. Conclusions: Light balance, large-sample training, and two-stage training can improve the accuracy of HRV estimation.
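One plausible reading of the Hamming-window step is sketched below: a normalized Hamming window smooths the per-cell signal to estimate its slow trend, which is then subtracted before further filtering. The window length, frame rate, and signal are assumptions, not the paper's settings.

```python
# Hamming-window trend removal on a synthetic pulse-like signal (assumed interpretation).
import numpy as np

fs = 30                                    # assumed camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * t / 10   # pulse component + slow drift

win = np.hamming(2 * fs + 1)               # ~2 s smoothing window
win /= win.sum()
trend = np.convolve(signal, win, mode="same")
detrended = signal - trend                 # what would be passed on for further filtering
print(detrended[:5])
```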
Interval Uncertainty Propagation (IUP) holds significant importance in quantifying uncertainties in structural outputs when confronted with interval input parameters. In the aviation field, the precise determination of probability models for input parameters of aeronautical structures entails substantial costs in both time and finances. As an alternative, the use of interval variables to describe input parameter uncertainty becomes a pragmatic approach. The complex task of solving the IUP for aeronautical structures, particularly in scenarios marked by pronounced nonlinearity and multiple outputs, necessitates innovative methodologies. This study introduces an efficient deep learning-driven approach to address the challenges associated with IUP. The proposed approach combines the Deep Neural Network (DNN) with intelligent optimization algorithms for dealing with the IUP in aeronautical structures. An inventive extremal value-oriented weighting technique is presented, assigning varying weights to different training samples within the loss function, thereby enhancing the computational accuracy of the DNN in predicting extremal values of structural outputs. Moreover, an adaptive framework is established to strategically balance the global exploration and local exploitation capabilities of the DNN, resulting in a predictive model that is both robust and accurate. To illustrate the effectiveness of the developed approach, various applications are explored, including a high-dimensional numerical example and two aeronautical structures. The obtained results highlight the high computational accuracy and efficiency achieved by the proposed approach, showcasing its potential for addressing complex IUP challenges in aeronautical engineering.
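As a rough illustration of sample weighting in the loss, the sketch below up-weights training points whose responses lie near the batch extremes; the weighting formula is an assumption, not the paper's extremal value-oriented scheme.

```python
# Sample-weighted MSE that emphasizes extremal responses (assumed weighting formula).
import numpy as np

def extremal_weights(y, power=2.0):
    """Weight samples by how close their response is to the batch min/max."""
    y = np.asarray(y, dtype=float)
    mid = 0.5 * (y.min() + y.max())
    half_range = 0.5 * (y.max() - y.min()) + 1e-12
    return 1.0 + (np.abs(y - mid) / half_range) ** power   # 1 at the center, 2 at the extremes

def weighted_mse(y_true, y_pred):
    w = extremal_weights(y_true)
    return np.mean(w * (np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(weighted_mse([0.0, 0.5, 1.0], [0.1, 0.5, 0.8]))
```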
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use our innovative methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and mitigating these insidious threats. Additionally, our research investigates one particular use case, the "Visit to Asia Network." The practical consequences of using uncertainty as a way to spot cases of data poisoning are explored in this inquiry, which is of utmost relevance. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Additionally, our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data.
In this paper, two Evolutionary Algorithms (EAs), namely an improved Genetic Algorithm (GA) and the Population-Based Incremental Learning (PBIL) algorithm, are applied for optimal coordination of directional overcurrent relays in an interconnected power system network. The problem of coordinating directional overcurrent relays is formulated as an optimization problem that is solved via the improved GA and PBIL. The simulation results obtained using the improved GA are compared with those obtained using PBIL. The results show that the improved GA proposed in this paper performs better than PBIL.
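For readers unfamiliar with PBIL, a generic version of its probability-vector update is sketched below on a toy bit-string problem; it is not the relay-coordination implementation.

```python
# Generic PBIL: sample a population from a probability vector, pull the vector toward
# the best sample, and apply a small mutation to keep exploring (illustrative only).
import numpy as np

def pbil(fitness, n_bits=20, pop=50, iters=100, lr=0.1, mut=0.02, rng=np.random.default_rng(0)):
    p = np.full(n_bits, 0.5)                       # probability of each bit being 1
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(int)
        best = samples[np.argmax([fitness(s) for s in samples])]
        p = (1 - lr) * p + lr * best               # move toward the best individual
        flip = rng.random(n_bits) < mut            # occasional mutation of the vector
        p[flip] = (1 - lr) * p[flip] + lr * rng.integers(0, 2, flip.sum())
    return p

print(np.round(pbil(lambda s: s.sum()), 2))        # converges toward all ones
```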
A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up the convergence of the BP algorithm and enhance the diagnostic accuracy. A heart disease database consisting of 352 samples is applied to the training and testing of the system. The performance of the system is assessed by the cross-validation method. It is found that as the basic BP algorithm is improved step by step, the convergence speed and the classification accuracy of the network are enhanced, and the system has great application prospects in supporting heart disease diagnosis.
In this article, we comment on the study by Yang et al., which demonstrated significant cross-sectional associations between heart rate variability (HRV) indices, depressive symptoms, and lung function in patients with chronic obstructive pulmonary disease (COPD). Building on these findings, we further explore the underlying mechanisms, particularly inflammatory-autonomic-oxidative stress pathways, as key causal mediators. Moreover, analyzing genetic polymorphisms alongside environmental factors may uncover susceptibility pathways explaining interindividual differences in HRV and comorbidity risk. Additionally, longitudinal studies tracking HRV trajectories could identify thresholds predictive of accelerated lung function decline or cardiovascular events, informing personalized prevention strategies. Integrating longitudinal HRV data with multi-omics biomarkers and machine learning models could enable real-time prediction of depression relapses or COPD exacerbations, facilitating proactive interventions such as personalized biofeedback training or precision anti-inflammatory therapies. By synthesizing these perspectives, this integrative approach promises to advance precision medicine for COPD patients, particularly those with comorbid depression, by addressing both mechanistic insights and clinical translation.