Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms - random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost) - to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard-deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
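As an illustration of the train/test protocol and the six metrics above, here is a minimal sketch, assuming synthetic stand-in data for the FELA results; scikit-learn's native ensembles stand in for the five models (xgboost.XGBRegressor and catboost.CatBoostRegressor plug into the same loop), and the IOS formula shown is one common convention that may differ from the paper's:

```python
# Hedged sketch, not the paper's code: synthetic data stand in for the 1188 FELA cases.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1188, n_features=5, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)  # 70/30 split

def metrics(y_true, y_pred):
    rmse = mean_squared_error(y_true, y_pred) ** 0.5
    return {"R2": r2_score(y_true, y_pred),
            "MAE": mean_absolute_error(y_true, y_pred),
            "RMSE": rmse,
            "RSR": rmse / np.std(y_true),                                 # RMSE / std. dev. of observations
            "VAF": (1 - np.var(y_true - y_pred) / np.var(y_true)) * 100,  # variance explained, %
            "IOS": rmse / np.mean(np.abs(y_true))}                        # one common IOS convention

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0),
              AdaBoostRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, {k: round(v, 3) for k, v in metrics(y_te, pred).items()})
```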
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on major development boards, software frameworks, sensors, and algorithms used in various applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications across various sectors.
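For readers unfamiliar with the model-conversion step the review surveys, the sketch below shows one widely used path: a Keras model quantized and converted to a TensorFlow Lite flatbuffer that TinyML runtimes (e.g., TFLite Micro) can execute on a microcontroller. The model and shapes are illustrative assumptions:

```python
# Hedged sketch of post-training quantization and conversion for a TinyML target.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # e.g., four sensor channels
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # three event classes
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"{len(tflite_model)} bytes")  # flatbuffer size matters on-device
```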
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information - such as emotions, opinions, and attitudes - from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
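A minimal sketch of the LSTM architecture the review identifies as most common, with illustrative (assumed) vocabulary and sequence sizes:

```python
# Hedged sketch: embedding -> LSTM -> sigmoid for binary sentiment polarity.
import tensorflow as tf

vocab_size, seq_len = 10_000, 100  # illustrative sizes, not taken from the review
model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab_size, 64),       # token ids -> dense vectors
    tf.keras.layers.LSTM(64),                        # sequence -> fixed summary
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # ready for model.fit(padded_token_ids, labels)
```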
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. How the problem is addressed determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and Smart Grid technology allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes the algorithms of random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm in comparison to the basic classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), such a model can be used as a methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. The use of such a sample will allow the top management of electric distribution companies to increase the efficiency of inspection raids by field personnel, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
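The stacking design described here can be sketched as follows, assuming simulated daily-consumption features; a single MLPClassifier stands in for the paper's homogeneous ANN ensemble:

```python
# Hedged sketch of the stacking meta-algorithm scored by ROC-AUC.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 365 daily readings per consumer; ~10% labeled as NTL (imbalanced, as in practice)
X, y = make_classification(n_samples=5000, n_features=365, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lgbm", LGBMClassifier(random_state=0)),
                ("ann", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))],
    final_estimator=LogisticRegression(),  # the generalizing meta-level
)
stack.fit(X_tr, y_tr)
proba = stack.predict_proba(X_te)[:, 1]   # NTL probability per consumer
print("ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```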
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. Artificial neural network (ANN) algorithms have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed. Second, the fabrication and research progress of neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to respective studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are then categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
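As a concrete instance of the simple neuron models the review discusses, here is a hedged sketch of a leaky integrate-and-fire unit with arbitrary illustrative constants:

```python
# Hedged sketch: a leaky integrate-and-fire (LIF) neuron, a basic biologically
# inspired building block; all constants are illustrative, in arbitrary units.
import numpy as np

dt, T = 1e-3, 0.5                                    # time step and duration (s)
tau, v_rest, v_th, v_reset = 20e-3, 0.0, 1.0, 0.0    # membrane constants
v, spikes = v_rest, []
rng = np.random.default_rng(0)

for step in range(int(T / dt)):
    i_in = 1.2 + 0.5 * rng.standard_normal()   # noisy input current
    v += dt / tau * (-(v - v_rest) + i_in)     # leaky integration of the membrane potential
    if v >= v_th:                              # threshold crossing -> spike, then reset
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in {T} s")
```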
Based on the Google Earth Engine cloud computing platform, this study employed three algorithms - Support Vector Machine (SVM), Random Forest (RF), and Classification and Regression Tree (CART) - to classify the current status of land covers in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results show that all three algorithms performed well for five basic types of land cover - Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas - as their overall accuracy and Kappa coefficient were greater than 80% and 0.8, respectively. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas, with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
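The accuracy assessment used here (overall accuracy plus Cohen's kappa) can be reproduced locally, as in this sketch with synthetic stand-ins for the Landsat 8 OLI bands; the actual study ran inside Google Earth Engine:

```python
# Hedged sketch: SVM on pixel spectra, scored with overall accuracy and kappa.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 7 reflective bands, 5 classes: rice, water, perennial, annual, built-up
X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                           n_redundant=2, n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pred = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
print("kappa:", round(cohen_kappa_score(y_te, pred), 3))
```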
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The research results show that the methanol: palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is more suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol: PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
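A hedged sketch of the ANN-SA hybrid: a neural surrogate fitted to process data, then simulated annealing searching that surrogate; the yield surface and bounds below are invented stand-ins for the biodiesel database:

```python
# Hedged sketch: neural surrogate + scipy's dual_annealing over its inputs.
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: catalyst % (1-5), methanol:PFAD molar ratio (3-12), time in min (10-60)
X = rng.uniform([1, 3, 10], [5, 12, 60], size=(200, 3))
y = -(X[:, 0] - 3) ** 2 - 0.2 * (X[:, 1] - 9) ** 2 - 0.01 * (X[:, 2] - 35) ** 2 \
    + rng.normal(scale=0.1, size=200)                       # mock yield surface

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                 random_state=0)).fit(X, y)  # surrogate model

res = dual_annealing(lambda x: -ann.predict(x.reshape(1, -1))[0],  # negate to maximize yield
                     bounds=[(1, 5), (3, 12), (10, 60)], seed=0)
print("optimal catalyst %, molar ratio, time:", res.x.round(2))
```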
The current study aimed at evaluating the capabilities of seven advanced machine learning techniques (MLTs), including Support Vector Machine (SVM), Random Forest (RF), Multivariate Adaptive Regression Spline (MARS), Artificial Neural Network (ANN), Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), and Naive Bayes (NB), for landslide susceptibility modeling and comparing their performances. Coupling machine learning algorithms with spatial data types for landslide susceptibility mapping is a vitally important issue. This study was carried out using GIS and R open-source software at Abha Basin, Asir Region, Saudi Arabia. First, a total of 243 landslide locations were identified at Abha Basin to prepare the landslide inventory map using different data sources. All the landslide areas were randomly separated into two groups with a ratio of 70% for training and 30% for validation. Twelve landslide variables were generated for landslide susceptibility modeling: altitude, lithology, distance to faults, normalized difference vegetation index (NDVI), land use/land cover (LULC), distance to roads, slope angle, distance to streams, profile curvature, plan curvature, slope length (LS), and slope aspect. The area under the curve (AUC-ROC) approach was applied to evaluate, validate, and compare the MLTs' performance. The results indicated that AUC values for the seven MLTs range from 89.0% for QDA to 95.1% for RF. Our findings showed that RF (AUC = 95.1%) and LDA (AUC = 94.17%) produced the best performances in comparison to the other MLTs. The outcome of this study and the landslide susceptibility maps would be useful for environmental protection.
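The AUC-ROC comparison translates to a few lines, sketched here on synthetic data (MARS is omitted because it has no scikit-learn implementation):

```python
# Hedged sketch: AUC-ROC comparison across the classifiers named in the study.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 243 slide points plus an equal number of non-slide points, 12 conditioning factors
X, y = make_classification(n_samples=486, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

models = {"SVM": SVC(probability=True), "RF": RandomForestClassifier(random_state=0),
          "ANN": MLPClassifier(max_iter=1000, random_state=0),
          "QDA": QuadraticDiscriminantAnalysis(), "LDA": LinearDiscriminantAnalysis(),
          "NB": GaussianNB()}
for name, m in models.items():
    auc = roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```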
Compression index Cc is an essential parameter in geotechnical design for which the effectiveness of correlation is still a challenge. This paper suggests a novel modelling approach using machine learning (ML) techniques. The performance of five commonly used ML algorithms, i.e. back-propagation neural network (BPNN), extreme learning machine (ELM), support vector machine (SVM), random forest (RF) and evolutionary polynomial regression (EPR), in predicting Cc is comprehensively investigated. A database with a total number of 311 datasets including three input variables, i.e. initial void ratio e0, liquid limit water content wL and plasticity index Ip, and one output variable Cc is first established. A genetic algorithm (GA) is used to optimize the hyper-parameters in the five ML algorithms, and the average prediction error over the 10-fold cross-validation (CV) sets is set as the fitness function in the GA to enhance the robustness of the ML models. The results indicate that the ML models outperform empirical prediction formulations with lower prediction error. RF yields the lowest error, followed by BPNN, ELM, EPR and SVM. If the ranges of input variables in the database are large enough, the BPNN and RF models are recommended to predict Cc. Furthermore, if the distribution of input variables is continuous, the RF model is the best one. Otherwise, the EPR model is recommended if the ranges of input variables are small. The predicted correlations between input and output variables using the five ML models show great agreement with the physical explanation.
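A toy version of the GA-tuned, 10-fold-CV-scored workflow, sketched with a select-and-mutate loop over two random forest hyper-parameters and mock (e0, wL, Ip) data in place of the 311 records:

```python
# Hedged sketch: GA-style hyper-parameter search with mean 10-fold CV error as fitness.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform([0.5, 20, 5], [2.5, 100, 60], size=(311, 3))   # mock e0, wL, Ip
y = 0.009 * (X[:, 1] - 10) + 0.2 * X[:, 0] + rng.normal(scale=0.02, size=311)  # mock Cc

def fitness(genes):  # genes = (n_estimators, max_depth); lower is fitter
    model = RandomForestRegressor(n_estimators=int(genes[0]), max_depth=int(genes[1]),
                                  random_state=0)
    return -cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error").mean()

pop = rng.uniform([50, 2], [300, 20], size=(8, 2))             # initial population
for gen in range(5):                                           # tiny GA: select + mutate
    pop = pop[np.argsort([fitness(g) for g in pop])][:4]       # keep the 4 fittest
    children = pop + rng.normal(scale=[20, 1], size=pop.shape) # Gaussian mutation
    pop = np.vstack([pop, np.clip(children, [50, 2], [300, 20])])
best = min(pop, key=fitness)
print("best n_estimators, max_depth:", best.astype(int))
```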
The risk of rockbursts is one of the main threats in hard coal mines. Compared to other underground mines, the number of factors contributing to rockbursts at underground coal mines is much greater. Factors such as the coal seam's tendency to rockbursts, the thickness of the coal seam, and the stress level in the seam have to be considered, but the entire coal seam-surrounding rock system also has to be evaluated when trying to predict rockbursts. Moreover, in hard coal mines there are stroke or stress-stroke rockbursts in which the fracture of a thick layer of sandstone plays an essential role in predicting rockbursts. The occurrence of rockbursts in coal mines is complex, and their prediction is even more difficult than in other mines. In recent years, interest in machine learning algorithms for solving complex nonlinear problems has increased, which also applies to the geosciences. This study attempts to use machine learning algorithms, i.e. neural network, decision tree, random forest, gradient boosting, and extreme gradient boosting (XGB), to assess the rockburst hazard of an active hard coal mine in the Upper Silesian Coal Basin. The rock mass bursting tendency index WTG, which describes the tendency of the seam-surrounding rock system to rockbursts, and the anomaly of the vertical stress component were applied for this purpose. In particular, the decision tree and neural network models proved to be effective in correctly distinguishing rockbursts from tremors after which the excavation was not damaged. On average, these models correctly classified about 80% of the rockbursts in the testing datasets.
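A minimal sketch of the classification task: a decision tree separating damaging rockbursts from harmless tremors using the two inputs named above, with simulated values for WTG and the stress anomaly:

```python
# Hedged sketch: decision tree on (WTG, vertical-stress anomaly) stand-in data.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)   # [WTG, stress anomaly]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))  # ~0.8 in the study
```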
This investigation assessed the efficacy of 10 widely used machine learning algorithms (MLA) comprising the least absolute shrinkage and selection operator (LASSO), generalized linear model (GLM), stepwise generalized linear model (SGLM), elastic net (ENET), partial least square (PLS), ridge regression, support vector machine (SVM), classification and regression trees (CART), bagged CART, and random forest (RF) for gully erosion susceptibility mapping (GESM) in Iran. The locations of 462 previously existing gully erosion sites were mapped through widespread field investigations, of which 70% (323) and 30% (139) of observations were randomly divided for algorithm calibration and validation, respectively. Twelve controlling factors for gully erosion, namely, soil texture, annual mean rainfall, digital elevation model (DEM), drainage density, slope, lithology, topographic wetness index (TWI), distance from rivers, aspect, distance from roads, plan curvature, and profile curvature, were ranked in terms of their importance using each MLA. The MLA were compared using a training dataset for gully erosion and statistical measures such as RMSE (root mean square error), MAE (mean absolute error), and R-squared. Based on the comparisons among the MLA, the RF algorithm exhibited the minimum RMSE and MAE and the maximum R-squared, and was therefore selected as the best model. The variable importance evaluation using the RF model revealed that distance from rivers had the highest significance in influencing the occurrence of gully erosion, whereas plan curvature had the least importance. According to the GESM generated using RF, most of the study area is predicted to have a low (53.72%) or moderate (29.65%) susceptibility to gully erosion, whereas only a small area is identified to have a high (12.56%) or very high (4.07%) susceptibility. The outcome generated by the RF model was validated using the ROC (Receiver Operating Characteristics) curve approach, which returned an area under the curve (AUC) of 0.985, proving the excellent forecasting ability of the model. The GESM prepared using the RF algorithm can aid decision-makers in targeting remedial actions for minimizing the damage caused by gully erosion.
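The RF workflow (fit, rank the twelve factors, validate with AUC) in sketch form, with simulated factor values:

```python
# Hedged sketch: random forest fit, variable-importance ranking, and AUC validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

factors = ["soil texture", "rainfall", "DEM", "drainage density", "slope",
           "lithology", "TWI", "dist. rivers", "aspect", "dist. roads",
           "plan curvature", "profile curvature"]
X, y = make_classification(n_samples=462, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
for i in np.argsort(rf.feature_importances_)[::-1]:           # most to least important
    print(f"{factors[i]:>18s}: {rf.feature_importances_[i]:.3f}")
```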
The vehicle routing problem (VRP) is a typical discrete combinatorial optimization problem, and many models and algorithms have been proposed to solve the VRP and its variants. Although existing approaches have contributed significantly to the development of this field, these approaches either are limited in problem size or need manual intervention in choosing parameters. To overcome these difficulties, many studies have considered learning-based optimization (LBO) algorithms for solving the VRP. This paper reviews recent advances in this field and divides relevant approaches into end-to-end approaches and step-by-step approaches. We performed a statistical analysis of the reviewed articles from various aspects and designed three experiments to evaluate the performance of four representative LBO algorithms. Finally, we summarize the types of problems to which different LBO algorithms are applicable and suggest directions in which researchers can improve LBO algorithms.
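To make the step-by-step category concrete, the toy below shows the decision sequence such methods learn to improve, here taken by a plain nearest-neighbour rule on a random capacitated instance:

```python
# Hedged sketch: a greedy construction baseline; a step-by-step LBO method would
# replace the "pick the nearest feasible customer" rule with a learned policy.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(size=(11, 2))          # index 0 = depot, 1..10 = customers
demand = np.r_[0, rng.integers(1, 5, 10)]   # per-customer demand
capacity = 10

unvisited, routes = set(range(1, 11)), []
while unvisited:
    route, load, pos = [0], 0, 0            # start a new vehicle at the depot
    while True:
        feasible = [c for c in unvisited if load + demand[c] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: np.linalg.norm(coords[c] - coords[pos]))
        route.append(nxt); load += demand[nxt]; pos = nxt
        unvisited.remove(nxt)
    routes.append(route + [0])              # return to depot
print(routes)
```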
The Covid-19 epidemic poses a serious public health threat to the world, where people with little or no pre-existing human immunity can be more vulnerable to its effects. Thus, developing surveillance systems for predicting the Covid-19 pandemic at an early stage could save millions of lives. In this study, a deep learning algorithm and a Holt-trend model are proposed to predict the coronavirus. The Long Short-Term Memory (LSTM) and Holt-trend algorithms were applied to predict confirmed numbers and death cases. The real-time data used were collected from the World Health Organization (WHO). In the proposed research, we considered three countries to test the proposed model, namely Saudi Arabia, Spain and Italy. The results suggest that the LSTM models show better performance in predicting the cases of coronavirus patients. Standard performance measures - mean squared error (MSE), root mean squared error (RMSE), mean error and correlation - are employed to estimate the results of the proposed models. The empirical results of the LSTM, using the correlation metric, are 99.94%, 99.94% and 99.91% in predicting the number of confirmed cases in the three countries. As for the results of the LSTM model in predicting the number of Covid-19 deaths, they are 99.86%, 98.876% and 99.16% with respect to Saudi Arabia, Italy and Spain, respectively. Similarly, the results of the Holt-trend model in predicting the number of confirmed Covid-19 cases, using the correlation metric, are 99.06%, 99.96% and 99.94%, whereas its results in predicting the number of death cases are 99.80%, 99.96% and 99.94% with respect to Saudi Arabia, Italy and Spain, respectively. The empirical results indicate the efficient performance of the presented models in predicting the number of confirmed and death cases of Covid-19 in these countries. Such findings provide better insights regarding the future of the Covid-19 pandemic in general. The results were obtained by applying time series models, which should be considered for the sake of saving the lives of many people.
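The Holt-trend half of the comparison is a few lines with statsmodels, sketched here on a mock cumulative-cases series (the LSTM half would fit a sliding-window sequence model in the same spirit):

```python
# Hedged sketch: Holt's linear-trend exponential smoothing on a mock series.
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# mock cumulative confirmed-case counts over 60 days
cases = np.cumsum(np.random.default_rng(0).poisson(200, size=60)).astype(float)

fit = Holt(cases).fit()    # level + trend, smoothing parameters chosen by MLE
print(fit.forecast(7))     # projection for the next 7 days
```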
Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks.
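The PCA-versus-raw comparison can be sketched as two pipelines on synthetic 60-feature data; network sizes are illustrative assumptions:

```python
# Hedged sketch: classify next-day return direction with and without a PCA front end.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=60, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0))
pca = make_pipeline(StandardScaler(), PCA(n_components=15),   # compress 60 -> 15 components
                    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0))
print("raw accuracy:", round(raw.fit(X_tr, y_tr).score(X_te, y_te), 3))
print("PCA accuracy:", round(pca.fit(X_tr, y_tr).score(X_te, y_te), 3))
```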
Periodontitis is closely related to many systemic diseases linked by different periodontal pathogens. To unravel the relationship between periodontitis and systemic diseases, it is very important to correctly discriminate major periodontal pathogens. To realize convenient, efficient, and high-accuracy bacterial species classification, the authors use Raman spectroscopy combined with machine learning algorithms to distinguish three major periodontal pathogens: Porphyromonas gingivalis (Pg), Fusobacterium nucleatum (Fn), and Aggregatibacter actinomycetemcomitans (Aa). The results show that this novel method can successfully discriminate the three abovementioned periodontal pathogens. Moreover, the classification accuracies for the three categories of the original data were 94.7% at the sample level and 93.9% at the spectrum level using the extra trees machine learning algorithm. This study provides a fast, simple, and accurate method which is very beneficial for differentiating periodontal pathogens.
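A hedged sketch of the spectrum-level classification with the extra-trees algorithm named above, on simulated Raman intensities:

```python
# Hedged sketch: extra trees on stand-in spectra (rows = spectra, columns = wavenumber bins).
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# three classes standing in for Pg / Fn / Aa
X, y = make_classification(n_samples=600, n_features=1000, n_informative=30,
                           n_classes=3, random_state=0)
clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```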
Some countries have announced national benchmark rates, while others have been working on the recent transition in which the London Interbank Offered Rate will be retired at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered using daily data between December 28, 2018, and December 31, 2020, by applying machine learning algorithms and Ordinary Least Squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by Central Banks; (2) country-level macroeconomic factors have a higher impact whereas global factors are less important, and the pandemic does not have a significant effect; and (3) Random Forest is the most accurate prediction model. Taking action by considering the study's findings can help support economic growth by achieving low-level benchmark rates.
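The two-pronged analysis (OLS coefficients alongside random forest importances) looks like this in sketch form, with simulated daily data and assumed determinant names:

```python
# Hedged sketch: OLS view and RF variable-importance view of the same regressors.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = ["CB securities", "FX rate", "CDS", "VIX", "oil",
         "inflation", "policy rate", "COVID cases"]       # illustrative determinants
X = rng.standard_normal((500, len(names)))
y = 1.5 * X[:, 0] + 0.4 * X[:, 6] + rng.normal(scale=0.5, size=500)  # mock TLREF

print(sm.OLS(y, sm.add_constant(X)).fit().params.round(2))           # OLS coefficients
rf = RandomForestRegressor(random_state=0).fit(X, y)
print(dict(zip(names, rf.feature_importances_.round(2))))            # RF importances
```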
Traditional geostatistical estimation techniques have been used predominantly by the mining industry for ore reserve estimation. Determination of mineral reserves has posed a considerable challenge to mining engineers due to the geological complexities of ore body formation. Extensive research over the years has resulted in the development of several state-of-the-art methods for predictive spatial mapping, which could be used for ore reserve estimation, and recent advances in the use of machine learning algorithms (MLA) have provided a new approach to solving the problem of ore reserve estimation. The focus of the present study was on the use of two MLA for estimating ore reserve: namely, neural networks (NN) and support vector machines (SVM). The application of MLA and the various issues involved in using them for reserve estimation are elaborated with the help of a complex drill-hole dataset that exhibits the typical properties of sparseness and impreciseness that might be associated with a mining dataset. To investigate the accuracy and applicability of MLA for ore reserve estimation, the generalization ability of NN and SVM was compared with the geostatistical ordinary kriging (OK) method.
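A minimal sketch of the MLA side of the comparison: a neural network and an SVM estimating grade from drill-hole coordinates on a synthetic grade field (the OK baseline would come from a geostatistics package such as pykrige):

```python
# Hedged sketch: grade estimation from (easting, northing, depth) stand-in holes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1000, size=(300, 3))                    # easting, northing, depth
y = 2 + np.sin(X[:, 0] / 150) + 0.5 * np.cos(X[:, 1] / 200) \
    + rng.normal(scale=0.2, size=300)                      # mock spatial grade field
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in (
    ("NN", make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                      random_state=0))),
    ("SVM", make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))),
):
    print(name, "R2:", round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```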
Soil swelling-related disasters are considered among the most devastating geo-hazards in modern history. Hence, proper determination of a soil's ability to expand is vital for achieving a secure and safe ground for infrastructure. Accordingly, this study provides a novel and intelligent approach that enables an improved estimation of swelling by using kernelised machines (Bayesian linear regression (BLR), Bayes point machine (BPM), support vector machine (SVM) and deep support vector machine (D-SVM)), linear and neural models (multiple linear regressor (REG), logistic regressor (LR) and artificial neural network (ANN)), and tree-based algorithms such as decision forest (RDF) and boosted trees (BDT). Also, and for the first time, meta-heuristic classifiers incorporating the techniques of voting (VE) and stacking (SE) were utilised. Different independent scenarios of explanatory feature combinations that influence soil behaviour in swelling were investigated. Preliminary results indicated that BLR possessed the highest amount of deviation from the predictor variable (the actual swell-strain). REG and BLR performed slightly better than ANN, while the meta-heuristic learners (VE and SE) produced the best overall performance (greatest R² value of 0.94 and RMSE of 0.06% exhibited by VE). CEC, plasticity index and moisture content were the features considered to have the highest level of importance. Kernelised binary classifiers (SVM, D-SVM and BPM) gave better accuracy (average accuracy and recall rate of 0.93 and 0.60) compared to ANN, LR and RDF. A sensitivity-driven diagnostic test indicated that the meta-heuristic models' best performance occurred when ML training was conducted using the k-fold validation technique. Finally, it is recommended that the concepts developed herein be deployed during the preliminary phases of a geotechnical or geological site characterisation by using the best performing meta-heuristic models via their background coding resource.
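The voting and stacking meta-learners can be sketched with scikit-learn's built-in ensembles and k-fold scoring, on synthetic stand-ins for the explanatory features:

```python
# Hedged sketch: voting and stacking meta-learners over base regressors,
# scored with k-fold cross-validation as in the diagnostic test.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor, VotingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# six features standing in for CEC, plasticity index, moisture content, etc.
X, y = make_regression(n_samples=300, n_features=6, noise=5, random_state=0)
base = [("svm", SVR()), ("rf", RandomForestRegressor(random_state=0)),
        ("bdt", GradientBoostingRegressor(random_state=0))]

for name, ens in (("voting", VotingRegressor(base)),
                  ("stacking", StackingRegressor(base, final_estimator=LinearRegression()))):
    print(name, "k-fold R2:", cross_val_score(ens, X, y, cv=5).mean().round(3))
```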