Under global warming, extreme precipitation in the Yellow River Basin (YRB) is increasing, threatening regional development and the environment. This trend highlights the need to improve our ability to predict extreme precipitation. We used empirical orthogonal function (EOF) analysis and a method based on year-to-year increments (DY) to extract the spatial patterns and principal components (PCs) of the DY of total summer precipitation exceeding the 95th percentile threshold (R95pTOT-DY) in the YRB from 1962 to 2010. Prediction models for the PCs during 2011–2022 were autonomously established using two sets of machine learning (ML) models, the Light Gradient Boosting Machine (LightGBM) and Categorical Boosting (CatBoost). Without human intervention, these models independently identified significant climatic factors from 114 ocean and circulation indices at leads of 1–6 months. LightGBM predicted time correlation coefficients (TCCs) of 0.53, 0.63, and 0.60 for the PCs, while CatBoost yielded TCCs of 0.50, 0.69, and 0.57, respectively. Notably, the PC3 prediction significantly outperformed traditional linear regression (LR). Improved R95pTOT-DY prediction was observed in the YRB's source area and its middle reaches. The ensemble of the LightGBM and CatBoost models showed the best R95pTOT-DY prediction (regional average TCC of 0.51), with reasonable performance over 12 years of forecasting R95pTOT (a multiyear average pattern correlation coefficient (PCC) of 0.36), surpassing the traditional LR method. A SHapley Additive exPlanations (SHAP) analysis attributed the PC3 enhancement to the North American Polar Vortex Area Index (NAPVAI). Overall, this ML-based incremental model enhances extreme precipitation prediction, aiding risk assessment and warnings in the YRB. Funding: supported by the National Science Foundation of China (Grant No. 42041004).
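As a rough illustration of the workflow described above, the sketch below trains a gradient-boosting regressor on lagged predictor indices, scores the hindcast with a time correlation coefficient, and attributes the prediction with SHAP. The synthetic data, column counts, and hyperparameters are assumptions for demonstration, not the study's configuration; the `lightgbm` and `shap` packages are assumed to be installed.

```python
# Minimal sketch: predict one principal component (PC) of R95pTOT-DY from
# lagged climate indices with LightGBM and attribute the result with SHAP.
# Synthetic data stands in for the 114 ocean/circulation indices; the lag
# handling and hyperparameters are illustrative assumptions.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
n_years, n_indices = 61, 114                      # 1962-2022, 114 candidate predictors
X = rng.normal(size=(n_years, n_indices))         # indices already taken at 1-6 month leads
y = 0.8 * X[:, 3] - 0.5 * X[:, 42] + 0.3 * rng.normal(size=n_years)  # a PC increment

X_train, y_train = X[:49], y[:49]                 # 1962-2010 training period
X_test, y_test = X[49:], y[49:]                   # 2011-2022 hindcast period

model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05,
                          max_depth=3, min_child_samples=5)
model.fit(X_train, y_train)
pred = model.predict(X_test)

tcc = np.corrcoef(pred, y_test)[0, 1]             # time correlation coefficient (TCC)
print(f"TCC on the hindcast period: {tcc:.2f}")

# SHAP attribution: which index drives the prediction (e.g. NAPVAI for PC3)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
top = np.abs(shap_values).mean(axis=0).argmax()
print(f"Most influential predictor: column {top}")
```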
As one of the most serious geological disasters in deep underground engineering, rockburst has caused a large number of casualties. However, because of the complex relationship between the inducing factors and rockburst intensity, the problem of rockburst intensity prediction has not been well solved until now. In this study, we collect 292 sets of rockburst data including eight parameters, such as the maximum tangential stress of the surrounding rock σ_θ, the uniaxial compressive strength of the rock σ_c, the uniaxial tensile strength of the rock σ_t, and the strain energy storage index W_et, from more than 20 underground projects as training sets, and establish two new rockburst prediction models based on the kernel extreme learning machine (KELM) combined with the genetic algorithm (KELM-GA) and the cross-entropy method (KELM-CEM). To further verify the two models, ten sets of rockburst data from the Shuangjiangkou Hydropower Station are selected for analysis, and the results show that the new models are more accurate than five traditional empirical criteria, especially the model based on KELM-CEM, which has an accuracy rate of 90%. Meanwhile, the results of 10 consecutive runs of the KELM-CEM model are almost the same, meaning that the model has good stability and reliability for engineering applications. Funding: funded by the National Natural Science Foundation of China (Grant Nos. 41825018 and 42141009) and the Second Tibetan Plateau Scientific Expedition and Research Program (Grant No. 2019QZKK0904).
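For readers unfamiliar with the kernel variant, the following is a minimal sketch of a KELM classifier of the kind both rockburst models build on: an RBF kernel plus a ridge-style closed-form solution for the output weights. The GA and cross-entropy searches over (C, gamma) used in the paper are replaced by fixed, assumed values, and the data are synthetic.

```python
# Minimal sketch of a kernel extreme learning machine (KELM) classifier:
# RBF kernel, one-hot targets, and closed-form output weights. The (C, gamma)
# values are assumed rather than tuned by GA or cross-entropy as in the paper.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(y.max() + 1)[y]                      # one-hot intensity grades
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Toy example: 6 input features (e.g. sigma_theta, sigma_c, sigma_t, W_et, ...)
rng = np.random.default_rng(1)
X = rng.normal(size=(292, 6)); y = rng.integers(0, 4, size=292)   # 4 intensity grades
model = KELM().fit(X, y)
print(model.predict(X[:5]).argmax(axis=1))
```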
Extreme-mass-ratio inspiral (EMRI) signals pose significant challenges to gravitational wave (GW) data analysis, mainly owing to their highly complex waveforms and high-dimensional parameter space. Given their extended timescales of months to years and low signal-to-noise ratios, detecting and analyzing EMRIs with confidence generally relies on long-term observations. Beyond the length of the data, parameter estimation is particularly challenging because of non-local parameter degeneracies arising from multiple local maxima, as well as flat regions and ridges inherent in the likelihood function. These factors lead to exceptionally high time complexity for parameter analysis based on traditional matched filtering and random sampling methods. To address these challenges, the present study explores a machine learning approach to Bayesian posterior estimation of EMRI signals, leveraging the recently developed flow matching technique based on ordinary differential equation neural networks. To our knowledge, this is also the first instance of applying continuous normalizing flows to EMRI analysis. Our approach demonstrates an increase in computational efficiency by several orders of magnitude compared with traditional Markov chain Monte Carlo (MCMC) methods, while preserving the unbiasedness of results. However, we note that the posterior distributions generated by flow matching posterior estimation (FMPE) may exhibit broader uncertainty ranges than those obtained through full Bayesian sampling, requiring subsequent refinement via methods such as MCMC. Notably, when searching from large priors, our model rapidly approaches the true values while MCMC struggles to converge to the global maximum. Our findings highlight that machine learning has the potential to efficiently handle the vast EMRI parameter space of up to seventeen dimensions, offering new perspectives for advancing space-based GW detection and GW astronomy. Funding: supported by the National Key Research and Development Program of China (Grant Nos. 2021YFC2201901, 2021YFC2203004, 2020YFC2200100 and 2021YFC2201903), the International Partnership Program of the Chinese Academy of Sciences (Grant No. 025GJHZ2023106GC), and the Brazilian agencies Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul (FAPERGS), Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
To ensure the long-term safety and reliability of electric vehicles and energy storage systems, an accurate estimation of the state of health (SOH) of lithium-ion batteries is important. In this study, a method for estimating the lithium-ion battery SOH was proposed based on an improved extreme learning machine (ELM). Input weights and hidden layer biases are generated randomly in the traditional ELM. To improve the estimation accuracy of the ELM, the differential evolution algorithm was used to optimize these parameters within feasible solution spaces. First, incremental capacity curves were obtained by incremental capacity analysis and smoothed with a Gaussian filter to extract health indicators. Then, the ELM based on the differential evolution algorithm (the DE-ELM model) was used for lithium-ion battery SOH estimation. Finally, four battery historical aging data sets and one random walk data set were employed to validate the prediction performance of the DE-ELM model. Results show that the DE-ELM has better performance than the other studied algorithms in terms of generalization ability.
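A minimal sketch of the DE-ELM idea follows: differential evolution (here via `scipy.optimize.differential_evolution`) searches the ELM input weights and hidden biases, while the output weights are still obtained in closed form. The feature and target arrays are synthetic stand-ins for the incremental-capacity health indicators and SOH labels, and all sizes are assumptions.

```python
# Minimal sketch of DE-ELM: differential evolution tunes the input weights and
# hidden biases of an ELM; output weights come from a pseudoinverse as usual.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))                    # 4 health indicators per cycle
y = 1.0 - 0.3 * X @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.01 * rng.normal(size=120)
Xtr, ytr, Xva, yva = X[:90], y[:90], X[90:], y[90:]

n_hidden, n_in = 10, X.shape[1]

def elm_rmse(params):
    W = params[: n_hidden * n_in].reshape(n_hidden, n_in)
    b = params[n_hidden * n_in:]
    H = np.tanh(Xtr @ W.T + b)                    # hidden layer output matrix
    beta = np.linalg.pinv(H) @ ytr                # closed-form output weights
    pred = np.tanh(Xva @ W.T + b) @ beta
    return np.sqrt(np.mean((pred - yva) ** 2))    # validation RMSE as fitness

bounds = [(-1.0, 1.0)] * (n_hidden * n_in + n_hidden)
result = differential_evolution(elm_rmse, bounds, maxiter=30, seed=0, polish=False)
print(f"validation RMSE after DE search: {result.fun:.4f}")
```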
Silicone material extrusion (MEX) is widely used for processing liquids and pastes. Owing to the uneven linewidth and elastic extrusion deformation caused by material accumulation, products may exhibit geometric errors and performance defects, leading to a decline in product quality and affecting their service life. This study proposes a process parameter optimization method that considers the mechanical properties of printed specimens and production costs. To improve the quality of silicone printing samples and reduce production costs, three machine learning models, the kernel extreme learning machine (KELM), support vector regression (SVR), and random forest (RF), were developed to predict these three factors. Training data were obtained through a complete factorial experiment. A new dataset is obtained using the Euclidean distance method, which assigns the elimination factor. The new dataset is trained with a Bayesian optimization algorithm for parameter optimization and is input into the improved double Gaussian extreme learning machine, finally yielding the improved KELM model. The results showed improved prediction accuracy over SVR and RF. Furthermore, a multi-objective optimization framework was proposed by combining genetic algorithm technology with the improved KELM model. The effectiveness and reasonableness of the model algorithm were verified by comparing the optimized results with the experimental results. Funding: supported by the National Key R&D Program of China (No. 2022YFA1005204l).
In the propylene polymerization (PP) process, the melt index (MI) is one of the most important quality variables for determining different brands of products and different grades of product quality. Accurate prediction of MI is essential for efficient and professional monitoring and control of practical PP processes. This paper presents a novel soft sensor based on the extreme learning machine (ELM) and a modified gravitational search algorithm (MGSA) to estimate MI from real PP process variables, where the MGSA algorithm is developed to find the best input weights and hidden biases for the ELM. As the comparative basis, ELM, APSO-ELM and GSA-ELM models are also developed, respectively. Based on data from a real PP production plant, a detailed comparison of the models is carried out. The research results show the accuracy and universality of the proposed model, which can be a powerful tool for online MI prediction. Funding: supported by the Major Program of the National Natural Science Foundation of China (61590921), the Natural Science Foundation of Zhejiang Province (Y16B040003), the Shanghai Aerospace Science and Technology Innovation Fund (E11501) and the Aerospace Science and Technology Innovation Fund of China Aerospace Science and Technology Corporation (E11601).
A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is opposite to that of naive l-fold cross validation. Compared with naive l-fold cross validation, fast l-fold cross validation has the advantage in computational time, especially for a large fold number such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression data sets are evaluated. Funding: supported by the National Natural Science Foundation of China (51006052) and the NUST Outstanding Scholar Supporting Program.
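The naive baseline that the fast method is compared against can be sketched as follows: a ridge-regularized ELM is retrained from scratch on every fold. The fast variant itself reuses intermediate matrices across folds and is not reproduced here; the regression data and hyperparameters below are assumptions.

```python
# Minimal sketch of naive l-fold cross validation for a regularized ELM (RELM):
# the model is refit on every fold, which is the baseline the fast method beats.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

n_hidden, C = 40, 100.0
W = rng.normal(size=(n_hidden, X.shape[1]))       # fixed random input weights
b = rng.normal(size=n_hidden)

def relm_fit_predict(Xtr, ytr, Xte):
    H = np.tanh(Xtr @ W.T + b)
    # ridge-regularized output weights: beta = (H'H + I/C)^(-1) H'y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ ytr)
    return np.tanh(Xte @ W.T + b) @ beta

def naive_lfold_cv(l):
    idx = np.array_split(np.arange(len(X)), l)
    errs = []
    for k in range(l):
        te = idx[k]
        tr = np.concatenate([idx[j] for j in range(l) if j != k])
        pred = relm_fit_predict(X[tr], y[tr], X[te])
        errs.append(np.mean((pred - y[te]) ** 2))
    return float(np.mean(errs))

print(f"naive 10-fold CV MSE: {naive_lfold_cv(10):.4f}")
```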
Rock burst is a kind of geological disaster in rock excavation in high-stress areas. To evaluate the intensity of rock burst, the maximum shear stress, uniaxial compressive strength, uniaxial tensile strength and rock elastic energy index were selected as input factors, and burst pit depth as the output factor. A rock burst prediction model was proposed based on genetic algorithms and the extreme learning machine, with the effect of the structural surface taken into consideration. Based on engineering examples of tunnels, the observed and collected data were divided into a training set, a validation set and a prediction set. The training set and validation set were used to train and optimize the model, and the parameter optimization results are presented. The hidden layer had 450 nodes, and the fitness of the predictions was 0.0197 under the optimal combination of input weights and offset vector. The optimized model was then tested with the prediction set. Results show that the proposed model is effective: the maximum relative error is 4.71% and the average relative error is 3.20%, which proves that the model has practical value in related engineering. Funding: Project (2013CB036004) supported by the National Basic Research Program of China; Project (51378510) supported by the National Natural Science Foundation of China.
Real-time and reliable measurements of effluent quality are essential to improve operating efficiency and reduce energy consumption in the wastewater treatment process. Because of the low accuracy and unstable performance of traditional effluent quality measurements, we propose a selective ensemble extreme learning machine modeling method to enhance effluent quality predictions. The extreme learning machine algorithm is inserted into a selective ensemble framework as the component model, since it runs much faster and provides better generalization performance than other popular learning algorithms. Ensemble extreme learning machine models overcome the variation across different trials of a single model. Selective ensembling based on a genetic algorithm is used to further exclude poor components from all the available ensembles in order to reduce the computational complexity and improve the generalization performance. The proposed method is verified with data from an industrial wastewater treatment plant located in Shenyang, China. Experimental results show that the proposed method has stronger generalization and higher accuracy than partial least squares, neural network partial least squares, single extreme learning machine and ensemble extreme learning machine models. Funding: supported by the National Natural Science Foundation of China (Nos. 61203102 and 60874057) and the Postdoctoral Science Foundation of China (No. 20100471464).
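A minimal sketch of the selective-ensemble idea: several ELMs with independent random hidden layers are trained, and only a subset is kept based on validation error. The paper performs the selection with a genetic algorithm; a simple greedy selection is used below as an assumed stand-in, with synthetic data.

```python
# Minimal sketch of a selective ensemble of ELMs: train many, keep the subset
# that minimizes validation MSE (greedy selection stands in for the GA).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6)); y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=300)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

def train_elm(seed, n_hidden=30):
    r = np.random.default_rng(seed)
    W, b = r.normal(size=(n_hidden, X.shape[1])), r.normal(size=n_hidden)
    beta = np.linalg.pinv(np.tanh(Xtr @ W.T + b)) @ ytr
    return lambda Z: np.tanh(Z @ W.T + b) @ beta

models = [train_elm(s) for s in range(20)]
preds = np.stack([m(Xva) for m in models])        # (n_models, n_val)

# Greedy selective ensemble: keep adding the model that most reduces val MSE.
selected, best = [], float("inf")
remaining = list(range(len(models)))
while remaining:
    mse, i = min((np.mean((preds[selected + [j]].mean(axis=0) - yva) ** 2), j)
                 for j in remaining)
    if mse >= best:
        break
    best, selected = mse, selected + [i]
    remaining.remove(i)
print(f"kept {len(selected)} of {len(models)} ELMs, validation MSE {best:.4f}")
```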
External short circuit (ESC) of lithium-ion batteries is one of the common and severe electrical failures in electric vehicles. In this study, a novel thermal model is developed to capture the temperature behavior of batteries under ESC conditions. Experiments were systematically performed under different initial battery states of charge and ambient temperatures. Based on the experimental results, we employed an extreme learning machine (ELM)-based thermal (ELMT) model to depict battery temperature behavior under ESC, where a lumped-state thermal model was used to replace the activation function of conventional ELMs. To demonstrate the effectiveness of the proposed model, we compared the ELMT model with a multi-lumped-state thermal (MLT) model parameterized by the genetic algorithm, using experimental data from various sets of battery cells. It is shown that the ELMT model achieves higher computational efficiency than the MLT model and better fitting and prediction accuracy: the average root mean squared error (RMSE) of the fitting is 0.65 ℃ for the ELMT model and 3.95 ℃ for the MLT model, and the RMSE of the prediction on a new data set is 3.97 ℃ for the ELMT model and 6.11 ℃ for the MLT model. Funding: supported by the National Key Research and Development Program of China (2018YFBO104100).
Aero-engine direct thrust control can not only improve thrust control precision but also save operating cost by reducing the reserved design margin and making full use of the engine's potential performance. However, it is a big challenge to estimate engine thrust accurately. To tackle this problem, this paper proposes an ensemble of improved wavelet extreme learning machines (EW-ELM) for aircraft engine thrust estimation. The extreme learning machine (ELM) has been proved to be an emerging learning technique with high efficiency. Since the combination of ELM and wavelet theory retains the excellent properties of both, wavelet activation functions are used in the hidden nodes to enhance the ability to handle non-linearity. Besides, as the original ELM may suffer from ill-conditioning and robustness problems due to the random determination of the hidden-node parameters, the particle swarm optimization (PSO) algorithm is adopted to select the input weights and hidden biases. Furthermore, the ensemble of improved wavelet ELMs is utilized to construct the relationship between the sensor measurements and thrust. The simulation results verify the effectiveness and efficiency of the developed method and show that aero-engine thrust estimation using EW-ELM can satisfy the requirements of direct thrust control in terms of estimation accuracy and computation time. Funding: supported by the National Natural Science Foundation of China (Nos. 51176075 and 51576097) and the Funding of Jiangsu Innovation Program for Graduate Education (No. KYLX_0305).
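The wavelet hidden nodes can be sketched in a few lines: the only change from a standard ELM is that the hidden activation is a wavelet rather than a sigmoid. The Morlet-like wavelet, the data, and the layer size below are assumptions; the PSO weight selection and the ensemble step are omitted.

```python
# Minimal sketch of an ELM with wavelet activation in the hidden layer,
# the core idea behind the wavelet ELM above (PSO and ensembling omitted).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                       # e.g. 8 engine sensor measurements
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)   # thrust-like target

def morlet(z):
    # Morlet-like wavelet used as the hidden activation (an assumed choice)
    return np.cos(1.75 * z) * np.exp(-0.5 * z ** 2)

n_hidden = 80
W = rng.normal(size=(n_hidden, X.shape[1]))
b = rng.normal(size=n_hidden)
H = morlet(X @ W.T + b)                             # wavelet hidden layer
beta = np.linalg.pinv(H) @ y                        # closed-form output weights

pred = morlet(X @ W.T + b) @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE with wavelet hidden nodes: {rmse:.4f}")
```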
Traditional artificial neural networks (ANN) such as back-propagation neural networks (BPNN) provide good predictions of length-of-day (LOD). However, the determination of the network topology is difficult and time consuming. Therefore, we propose a new type of neural network, the extreme learning machine (ELM), to improve the efficiency of LOD predictions. The Earth orientation parameters (EOP) C04 time series from the International Earth Rotation and Reference Systems Service (IERS) provides daily values and serves as our database. First, the known predictable effects that can be described by functional models, such as the effects of solid Earth and ocean tides or seasonal atmospheric variations, are removed a priori from the C04 time series. Only the residuals after the subtraction of the a priori model from the observed LOD data (i.e., the irregular and quasi-periodic variations) are employed for training and predictions. The predicted LOD is the sum of the a priori extrapolation model and the ELM predictions of the residuals. Different input patterns are discussed and compared to optimize the network solution. The prediction results are analyzed and compared with those obtained by other machine learning-based prediction methods, including BPNN, generalized regression neural networks (GRNN), and adaptive network-based fuzzy inference systems (ANFIS). It is shown that, while achieving similar prediction accuracy, the developed method uses much less training time than the other methods. Furthermore, to conduct a direct comparison with existing prediction techniques, the mean absolute error (MAE) of the proposed method is compared with that from the EOP prediction comparison campaign (EOP PCC). The results indicate that the accuracy of the proposed method is comparable with that of the former techniques, and its implementation is simple. Funding: supported by the West Light Foundation of the Chinese Academy of Sciences.
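The residual-learning scheme above can be illustrated as follows: a known functional (a priori) model is removed from the series, an ELM is trained only on the residuals with lagged values as inputs, and the forecast is the a priori extrapolation plus the predicted residual. The synthetic series and dimensions are assumptions; the real workflow uses the IERS C04 LOD series.

```python
# Minimal sketch of "a priori model + ELM on residuals" for LOD-like series.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(4000, dtype=float)                           # days
apriori = 1.5 + 0.4 * np.sin(2 * np.pi * t / 365.25)       # e.g. seasonal term
residual = 0.1 * np.sin(2 * np.pi * t / 1200.0) + 0.02 * rng.normal(size=t.size)
lod = apriori + residual                                   # "observed" LOD (ms)

# Build lagged-input samples from the residual series only, holding out the last day.
lags, N = 30, t.size
Xres = np.stack([residual[i - lags:i] for i in range(lags, N - 1)])
yres = residual[lags:N - 1]

n_hidden = 60
W = rng.normal(size=(n_hidden, lags)); b = rng.normal(size=n_hidden)
H = np.tanh(Xres @ W.T + b)
beta = np.linalg.pinv(H) @ yres                            # ELM output weights

# One-step-ahead forecast for the last day: a priori value + predicted residual.
x_last = residual[N - 1 - lags:N - 1]
pred_resid = np.tanh(x_last @ W.T + b) @ beta
pred_lod = apriori[N - 1] + pred_resid
print(f"forecast {pred_lod:.3f} ms vs observed {lod[N - 1]:.3f} ms")
```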
Objective: By optimizing an extreme learning machine network with particle swarm optimization, we established a syndrome classification and prediction model for primary liver cancer (PLC), classified and predicted the syndrome diagnosis of medical record data for PLC, and compared the prediction results of different algorithms with the clinical diagnosis results. This paper provides modern technical support for clinical diagnosis and treatment, and improves the objectivity, accuracy and rigor of the classification of traditional Chinese medicine (TCM) syndromes. Methods: From three top-level TCM hospitals in Nanchang, 10,602 electronic medical records from patients with PLC were collected, dating from January 2009 to May 2020. We removed the electronic medical records of 542 cases of syndromes and adopted the cross-validation method on the remaining 10,060 electronic medical records, which were randomly divided into a training set and a test set. Based on fuzzy mathematics theory, we quantified the syndrome-related factors of TCM symptoms and signs, together with information from the TCM four diagnostic methods. Next, using an extreme learning machine network with particle swarm optimization, we constructed a neural network syndrome classification and prediction model that used "TCM symptoms + signs + tongue diagnosis information + pulse diagnosis information" as input, and PLC syndrome as output. This approach was used to mine the nonlinear relationship between clinical data in electronic medical records and different syndrome types. The classification accuracy rate was used to compare this model with other machine learning classification models. Results: The classification accuracy rate of the model developed here was 86.26%. The classification accuracy rates of models using support vector machines and Bayesian networks were 82.79% and 85.84%, respectively. The classification accuracy rates of the models for all syndromes in this paper were between 82.15% and 93.82%. Conclusion: Compared with data processed using traditional binary inputs, the experiment shows that the medical record data processed by fuzzy mathematics were more accurate and closer to clinical findings. In addition, the model developed here was more refined, more accurate, and quicker than the other classification models. This model provides reliable diagnosis for the clinical treatment of PLC and a method to study the rules of syndrome differentiation and treatment in TCM. Funding: financially supported by the National Natural Science Foundation (No. 81660727).
The electroencephalogram (EEG) signal plays a key role in the diagnosis of epilepsy. Substantial data is generated by the EEG recordings of ambulatory recording systems, and detection of epileptic activity requires a time-consuming analysis of the complete EEG time series by a neurology expert. A variety of automatic epilepsy detection systems have been developed during the last ten years. In this paper, we investigate the potential of a recently proposed statistical measure known as Sample Entropy (SampEn) as a feature extraction method for the task of classifying three different kinds of EEG signals (normal, interictal and ictal) and detecting epileptic seizures. It is known that the value of SampEn falls suddenly during an epileptic seizure, and this fact is utilized in the proposed diagnosis system. Two different kinds of classification models, the back-propagation neural network (BPNN) and the recently developed extreme learning machine (ELM), are tested in this study. Results show that the proposed automatic epilepsy detection system, which uses sample entropy (SampEn) as the only input feature together with an extreme learning machine (ELM) classification model, not only achieves high classification accuracy (95.67%) but is also very fast.
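Sample Entropy itself is easy to state: the negative log ratio of (m+1)-point template matches to m-point matches within a tolerance r. A minimal implementation is sketched below; the EEG data and the downstream ELM/BPNN classifier are omitted, and the parameter choices are common defaults rather than the paper's.

```python
# Minimal sketch of Sample Entropy (SampEn). Regular signals (e.g. seizure EEG)
# have many repeating templates and therefore low SampEn; irregular signals score high.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                                         # tolerance
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev distance
            count += np.sum(d <= r) - 1                            # exclude self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
noisy = rng.normal(size=1000)                       # irregular -> high SampEn
regular = np.sin(np.linspace(0, 60 * np.pi, 1000))  # regular -> low SampEn
print(f"SampEn noise: {sample_entropy(noisy):.2f}, sine: {sample_entropy(regular):.2f}")
```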
WiFi fingerprinting localization methods have been a hot topic in indoor positioning because of their universality and location-related features. The basic assumption of fingerprinting localization is that the received signal strength indication (RSSI) distance is consistent with the location distance. Therefore, how to efficiently match the current RSSI of the user with the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization-extreme learning machine (PSO-ELM) algorithm is proposed on the basis of the original fingerprinting localization. First, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the corresponding relation between the location of the terminal and the RSSI it receives. Second, the PSO algorithm is used to improve the biases and weights of the ELM neural network, and the global optimum is obtained. Finally, extensive simulation results are presented. They show that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy compared with the K-Nearest Neighbor (KNN), K-means and Back-propagation (BP) algorithms. Funding: supported in part by the National Natural Science Foundation of China (U2001213 and 61971191), in part by the Beijing Natural Science Foundation under Grants L182018 and L201011, in part by the National Key Research and Development Project (2020YFB1807204), in part by the Key Project of the Natural Science Foundation of Jiangxi Province (20202ACBL202006), and in part by the Innovation Fund Designated for Graduate Students of Jiangxi Province (YC2020-S321).
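A minimal sketch of the PSO-ELM combination for fingerprinting: each particle encodes the ELM input weights and hidden biases, the fitness is the mean localization error on held-out reference points, and the global best configuration is retained. The RSSI model, dimensions, and PSO constants below are assumptions.

```python
# Minimal sketch of PSO-ELM for WiFi fingerprinting: PSO searches ELM hidden
# parameters, the ELM maps RSSI vectors to 2-D coordinates.
import numpy as np

rng = np.random.default_rng(0)
n_ap, n_hidden = 6, 20                                   # 6 access points
rp = rng.uniform(0, 20, size=(150, 2))                   # reference-point coordinates (m)
ap = rng.uniform(0, 20, size=(n_ap, 2))                  # access-point positions
dist = np.linalg.norm(rp[:, None, :] - ap[None, :, :], axis=2)
rssi = -40 - 20 * np.log10(dist + 1) + rng.normal(0, 2, size=(150, n_ap))
Xtr, ytr, Xva, yva = rssi[:120], rp[:120], rssi[120:], rp[120:]

dim = n_hidden * n_ap + n_hidden
def fitness(p):
    W = p[:n_hidden * n_ap].reshape(n_hidden, n_ap); b = p[n_hidden * n_ap:]
    beta = np.linalg.pinv(np.tanh(Xtr @ W.T + b)) @ ytr
    err = np.tanh(Xva @ W.T + b) @ beta - yva
    return np.mean(np.linalg.norm(err, axis=1))          # mean localization error (m)

pos = rng.uniform(-1, 1, (30, dim)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(40):                                      # standard PSO update
    r1, r2 = rng.random((2, 30, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print(f"best mean localization error: {pbest_f.min():.2f} m")
```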
With the development of satellite technology, satellite imagery of the Earth's surface makes it possible to survey surface resources and track the dynamic changes of the Earth with high efficiency and low consumption. As an important tool for satellite remote sensing image processing, remote sensing image classification has become a hot topic. According to the natural texture characteristics of remote sensing images, this paper combines different texture features with the extreme learning machine and proposes a new remote sensing image classification algorithm. Experimental tests are carried out on the standard test datasets SAT-4 and SAT-6. Our results show that the proposed method is a simpler and more efficient remote sensing image classification algorithm. It achieves 99.434% recognition accuracy on SAT-4, which is 1.5% higher than the 97.95% accuracy achieved by DeepSat. At the same time, the recognition accuracy on SAT-6 reaches 99.5728%, which is 5.6% higher than DeepSat's 93.9%. Funding: supported in part by a national science foundation project of P.R. China under Grant No. 61701554, the State Language Commission Key Project (ZDl135-39), First Class Courses (Digital Image Processing: KC2066), the MUC 111 Project, and the Ministry of Education Collaborative Education Project (201901056009, 201901160059, 201901238038).
The extreme learning machine (ELM) has been proved by researchers to be an effective pattern classification and regression learning mechanism. However, its good performance relies on a large number of hidden layer nodes, and as the number of hidden nodes increases, the computation cost grows substantially. In this paper, we propose a novel algorithm named the constrained voting extreme learning machine (CV-ELM). Compared with the traditional ELM, the CV-ELM determines the input weights and biases based on the differences between samples of different classes. At the same time, to improve the accuracy of the proposed method, voting selection is introduced. The proposed method is evaluated on public benchmark datasets, and the experimental results show that the proposed algorithm is superior to the original ELM algorithm. Further, we apply the CV-ELM to the classification of the superheat degree (SD) state in the aluminum electrolysis industry, where the recognition accuracy rate reaches 87.4%; the experimental results demonstrate that the proposed method is more robust than existing state-of-the-art identification methods. Funding: supported by the National Natural Science Foundation of China (61773405 and 61751312) and the Major Scientific and Technological Innovation Projects of Shandong Province (2019JZZY020123).
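The two ingredients named above can be sketched roughly as follows: hidden-layer input weights built from differences between samples of different classes, and majority voting over several such ELMs. The exact constraint used by CV-ELM and the superheat-degree dataset are not reproduced; random class-difference pairs and toy data are assumptions.

```python
# Rough sketch: ELM input weights drawn as differences between samples of
# different classes, plus majority voting across several such ELMs.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.repeat([0, 1], 100)

def constrained_elm(seed, n_hidden=25):
    r = np.random.default_rng(seed)
    i = r.choice(np.where(y == 0)[0], n_hidden)
    j = r.choice(np.where(y == 1)[0], n_hidden)
    W = X[i] - X[j]                                  # class-difference directions (assumed form)
    b = r.normal(size=n_hidden)
    T = np.eye(2)[y]                                 # one-hot targets
    beta = np.linalg.pinv(np.tanh(X @ W.T + b)) @ T
    return lambda Z: (np.tanh(Z @ W.T + b) @ beta).argmax(axis=1)

voters = [constrained_elm(s) for s in range(7)]
votes = np.stack([v(X) for v in voters])             # (n_voters, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)    # majority vote for binary labels
print(f"training accuracy with voting: {(majority == y).mean():.3f}")
```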
To implement a real-time reduction in NOx, a rapid and accurate model is required. A PLS-ELM model based on the combination of partial least squares (PLS) and the extreme learning machine (ELM) is proposed for establishing the NOx emission model of utility boilers. First, the initial input variables of the NOx emission model are determined according to mechanism analysis. Then, the initial input data are extracted by PLS. Finally, the extracted information is used as the input of the ELM model. A large amount of real data was obtained from the distributed control system (DCS) historical database of a 1000 MW power plant boiler to train and validate the PLS-ELM model. The modeling performance of the PLS-ELM model was compared with that of the back propagation (BP) neural network, support vector machine (SVM) and ELM models. The mean relative errors (MRE) of the PLS-ELM model were 1.58% for the training dataset and 1.69% for the testing dataset. The prediction precision of the PLS-ELM model is higher than those of the BP, SVM and ELM models, and its computation time is also shorter. Funding: supported by the National Natural Science Foundation of China (No. 71471060) and the Natural Science Foundation of Hebei Province (No. E2018502111).
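A minimal sketch of the PLS-ELM pipeline: PLS compresses the operating variables into a few latent components, and an ELM maps those components to the NOx emission. The data, the number of variables, and the number of components are assumptions; scikit-learn's PLSRegression stands in for the PLS step.

```python
# Minimal sketch of PLS feature extraction feeding an ELM regressor (PLS-ELM).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 25))                       # 25 DCS operating variables (assumed)
y = 300 + X[:, :5] @ np.array([30.0, -20.0, 15.0, 10.0, 5.0]) + rng.normal(0, 5, 2000)

pls = PLSRegression(n_components=6).fit(X, y)
Z = pls.transform(X)                                  # latent scores fed to the ELM

n_hidden = 50
W = rng.normal(size=(n_hidden, Z.shape[1])); b = rng.normal(size=n_hidden)
H = np.tanh(Z @ W.T + b)
beta = np.linalg.pinv(H) @ y                          # ELM output weights

pred = np.tanh(pls.transform(X) @ W.T + b) @ beta
mre = np.mean(np.abs((pred - y) / y))
print(f"mean relative error on training data: {mre:.2%}")
```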
This study presents a time series prediction model with output self-feedback, implemented based on the online sequential extreme learning machine. The output variables derived from the multilayer perceptron are fed back to the network input layer to create a temporal relation between the current node inputs and the lagged node outputs, while overcoming the limitation of memory, which is a vital part of any time series prediction application. The model can overcome the static prediction problem of most time series prediction models and can effectively cope with the dynamic properties of time series data. A linear and a nonlinear forecasting algorithm based on the online extreme learning machine are proposed to implement the output feedback forecasting model. They are both recursive estimators and have two distinct phases: predict and update. The proposed model was tested on different kinds of time series data, and the results indicate that it outperforms the original static model without feedback. Funding: the National Natural Science Foundation of China (No. 61203337).
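One way to realize the predict-then-update structure is an online sequential ELM with the previous output appended to the lagged inputs, as sketched below. The recursive update of the output weights follows the standard OS-ELM equations; the series, dimensions, and the exact feedback wiring are assumptions rather than the paper's implementation.

```python
# Minimal sketch of an online sequential ELM (OS-ELM) with output self-feedback:
# predict with the current weights, then update them recursively per sample.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(0.05 * np.arange(1500)) + 0.05 * rng.normal(size=1500)

lags, n_hidden = 5, 30
W = rng.normal(size=(n_hidden, lags + 1))             # +1 input for the fed-back output
b = rng.normal(size=n_hidden)
hidden = lambda x: np.tanh(W @ x + b)

def sample(t, prev_out):
    return np.concatenate([series[t - lags:t], [prev_out]])

# Initialization phase on the first chunk (ordinary least squares).
n0, prev = 200, 0.0
H0, T0 = [], []
for t in range(lags, lags + n0):
    H0.append(hidden(sample(t, prev)))
    T0.append(series[t])
    prev = series[t]
H0, T0 = np.array(H0), np.array(T0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ T0

# Sequential phase: predict, then update P and beta with the observed value.
errs = []
for t in range(lags + n0, len(series)):
    h = hidden(sample(t, prev))
    pred = h @ beta                                   # predict
    P = P - np.outer(P @ h, h @ P) / (1.0 + h @ P @ h)
    beta = beta + P @ h * (series[t] - h @ beta)      # update (OS-ELM rank-1 step)
    errs.append(pred - series[t])
    prev = pred                                       # feed the output back
print(f"sequential RMSE: {np.sqrt(np.mean(np.square(errs))):.4f}")
```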
With the rapid development of the Internet of Things (IoT), there are several challenges pertaining to security in IoT applications. Compared with the traditional Internet, the IoT has many problems, such as large numbers of assets, complex and diverse structures, and a lack of computing resources. Traditional network intrusion detection systems cannot meet the security needs of IoT applications. In view of this situation, this study applies cloud computing and machine learning to the intrusion detection system of the IoT to improve detection performance. Traditional intrusion detection algorithms usually require considerable training time and are not suitable for cloud computing because of the limited computing power and storage capacity of cloud nodes; therefore, it is necessary to study intrusion detection algorithms with low weight, short training time, and high detection accuracy for deployment and application on cloud nodes. An appropriate classification algorithm is a primary factor for deploying cloud computing intrusion prevention systems and a prerequisite for the system to respond to intrusion and reduce intrusion threats. This paper discusses the problems related to IoT intrusion prevention in cloud computing environments. Based on an analysis of cloud computing security threats, this study extensively explores IoT intrusion detection, cloud node monitoring, and intrusion response in cloud computing environments by using cloud computing, an improved extreme learning machine, and other methods. We use the Multi-Feature Extraction Extreme Learning Machine (MFE-ELM) algorithm for cloud computing, which adds a multi-feature extraction process to cloud servers, and deploy the MFE-ELM algorithm on cloud nodes to detect and discover network intrusions against cloud nodes. In our simulation experiments, a classical intrusion detection dataset is selected as the test case, and steps such as data preprocessing, feature engineering, model training, and result analysis are performed. The experimental results show that the proposed algorithm can effectively detect and identify most network data packets with good model performance and achieve efficient intrusion detection for heterogeneous IoT data from cloud nodes. Furthermore, it enables the cloud server to discover nodes with serious security threats in the cloud cluster in real time, so that further security protection measures can be taken to obtain the optimal intrusion response strategy for the cloud cluster. Funding: funded by the Key Research and Development Plan of Jiangsu Province (Social Development) No. BE20217162 and the Jiangsu Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project No. NJ2021-19.