A switch from avian-type α-2,3 to human-type α-2,6 receptors is an essential element for the initiation of a pandemic from an avian influenza virus. Some H9N2 viruses exhibit a preference for binding to human-type α-2,6 receptors, which highlights their potential threat to public health. However, our understanding of the molecular basis for the switch of receptor preference is still limited. In this study, we employed the random forest algorithm to identify potentially key amino acid sites within hemagglutinin (HA) that are associated with the receptor binding ability of H9N2 avian influenza virus (AIV). These sites were subsequently verified by receptor binding assays. A total of 12 substitutions in the HA protein (N158D, N158S, A160N, A160D, A160T, T163I, T163V, V190T, V190A, D193N, D193G, and N231D) were predicted to prefer binding to α-2,6 receptors. Except for V190T, all of these substitutions were demonstrated by receptor binding assays to bind preferentially to α-2,6 receptors. In particular, the A160T substitution caused a significant upregulation of immune-response genes and an increased mortality rate in mice. Our findings provide novel insights into the genetic basis of receptor preference of the H9N2 AIV.
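The site-screening step described above can be illustrated with a small sketch: a random forest is fitted to one-hot encoded HA alignment columns and per-position importances are summed to rank candidate sites. The toy sequences, labels, and encoding below are illustrative assumptions, not the study's actual feature set.

```python
# Minimal sketch of ranking aligned HA positions by random forest importance.
# The sequences and labels are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

# Hypothetical aligned HA fragments and receptor-preference labels
# (0 = alpha-2,3 preference, 1 = alpha-2,6 preference).
sequences = ["NATVDN", "DTIVGN", "NAIVDD", "STITND"]
labels = np.array([0, 1, 0, 1])

residues = np.array([list(s) for s in sequences])   # one column per HA position
encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(residues)                 # sparse one-hot matrix

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, labels)

# Sum per-category importances back to alignment positions and rank the sites.
position_importance = np.zeros(residues.shape[1])
start = 0
for pos, categories in enumerate(encoder.categories_):
    stop = start + len(categories)
    position_importance[pos] = model.feature_importances_[start:stop].sum()
    start = stop
print("positions ranked by importance:", np.argsort(position_importance)[::-1])
```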
As one of the most widely used non-ferrous metal structural materials in industry and production, aluminum (Al) alloy shows great value in the national economy and industrial manufacturing. How to classify Al alloy rapidly and accurately is therefore a significant and practical task. Classification methods based on laser-induced breakdown spectroscopy (LIBS) have been reported in recent years. Although LIBS is an advanced detection technology, it must be combined with a suitable algorithm to achieve rapid and accurate classification. As an important machine learning method, the random forest (RF) algorithm plays a great role in pattern recognition and material classification. This paper introduces a rapid classification method for Al alloy based on LIBS and the RF algorithm. The results show that the best accuracy reached by this method on Al alloy samples is 98.59%, with an average of 98.45%. The paper also reveals how the accuracy varies with the number of trees in the RF and with the size of the training sample set; based on these relationships, researchers can select optimized RF parameters to achieve good results. These results prove that LIBS combined with the RF algorithm can classify Al alloy effectively, precisely and rapidly with high accuracy, which has significant practical value.
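The accuracy-versus-number-of-trees relationship mentioned above can be examined with a short sweep; the synthetic spectra below stand in for real LIBS measurements and the tree counts are illustrative.

```python
# Minimal sketch of how classification accuracy varies with the number of RF trees,
# in the spirit of the LIBS + RF study. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_channels, n_classes = 600, 200, 4
X = rng.normal(size=(n_samples, n_channels))
y = rng.integers(0, n_classes, size=n_samples)
X[np.arange(n_samples), y] += 3.0  # give each class a distinguishing emission line

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n_trees in (10, 50, 100, 200):
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{n_trees:4d} trees -> accuracy {acc:.4f}")
```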
The aim of this study is to evaluate the ability of the random forest algorithm that combines data on transrectal ultrasound findings, age, and serum levels of prostate-specific antigen to predict prostate carcinoma. Clinico-demographic data were analyzed for 941 patients with prostate diseases treated at our hospital, including age, serum prostate-specific antigen levels, transrectal ultrasound findings, and pathology diagnosis based on ultrasound-guided needle biopsy of the prostate. These data were compared between patients with and without prostate cancer using the Chi-square test, and then entered into the random forest model to predict diagnosis. Patients with and without prostate cancer differed significantly in age and serum prostate-specific antigen levels (P < 0.001), as well as in all transrectal ultrasound characteristics (P < 0.05) except uneven echo (P = 0.609). The random forest model based on age, prostate-specific antigen and ultrasound predicted prostate cancer with an accuracy of 83.10%, sensitivity of 65.64%, and specificity of 93.83%. Positive predictive value was 86.72%, and negative predictive value was 81.64%. By integrating age, prostate-specific antigen levels and transrectal ultrasound findings, the random forest algorithm shows better diagnostic performance for prostate cancer than either diagnostic indicator on its own. This algorithm may help improve diagnosis of the disease by identifying patients at high risk for biopsy.
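The diagnostic metrics reported above (accuracy, sensitivity, specificity, PPV, NPV) follow directly from a confusion matrix; a minimal sketch is shown below with illustrative labels and predictions rather than the study's data.

```python
# Computing the diagnostic metrics of a binary classifier from its confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])  # 1 = biopsy-confirmed cancer (hypothetical)
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1, 0, 1])  # hypothetical RF predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # recall on the cancer class
specificity = tn / (tn + fp)
ppv         = tp / (tp + fp)          # positive predictive value
npv         = tn / (tn + fn)          # negative predictive value
print(accuracy, sensitivity, specificity, ppv, npv)
```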
With the arrival of the data age, data quality has become a problem that attracts much attention. As a field of data mining, outlier detection is closely related to data quality. The isolation forest algorithm is one of the more prominent outlier detection algorithms for numerical data in recent years. In the process of constructing isolation trees, as more trees are generated, the difference between them gradually decreases or even disappears, which wastes memory and reduces the efficiency of outlier detection. Moreover, some of the constructed isolation trees cannot detect outliers at all. In this paper, an improved iForest-based method, GA-iForest, is proposed. This method optimizes the isolation forest by selecting better isolation trees according to detection accuracy and the difference between trees, thereby removing duplicate, similar and poorly performing trees and improving the accuracy and stability of outlier detection. In the experiments, the Ubuntu system and the Spark platform are used to build the experimental environment, and outlier datasets provided by ODDS are used as test data. The performance of the proposed method is evaluated with indicators such as accuracy, recall rate, ROC curves, AUC and execution time. Experimental results show that the proposed method not only improves the accuracy and stability of outlier detection, but also reduces the number of isolation trees by 20%-40% compared with the original iForest method.
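For reference, the baseline isolation forest scoring that GA-iForest builds on can be sketched as below, evaluated with the ROC-AUC indicator mentioned above; the data are synthetic and the GA-based tree-selection step itself is not reproduced.

```python
# Minimal sketch of baseline isolation-forest outlier scoring with ROC-AUC evaluation.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
inliers  = rng.normal(0.0, 1.0, size=(950, 8))
outliers = rng.uniform(-6.0, 6.0, size=(50, 8))
X = np.vstack([inliers, outliers])
y = np.r_[np.zeros(len(inliers)), np.ones(len(outliers))]  # 1 = outlier

forest = IsolationForest(n_estimators=100, random_state=0).fit(X)
# score_samples is higher for normal points, so negate it to get an anomaly score.
anomaly_score = -forest.score_samples(X)
print("ROC-AUC:", roc_auc_score(y, anomaly_score))
```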
In this paper, sixty-eight research articles published between 2000 and 2017, as well as textbooks, which employed four classification algorithms: K-Nearest-Neighbor (KNN), Support Vector Machines (SVM), Random Forest (RF) and Neural Network (NN) as their main statistical tools were reviewed. The aim was to examine and compare these nonparametric classification methods on the following attributes: robustness to training data, sensitivity to changes, data fitting, stability, ability to handle large data sizes, sensitivity to noise, time invested in parameter tuning, and accuracy. The performances, strengths and shortcomings of each of the algorithms were examined, and finally a conclusion was reached on which one has higher performance. It was evident from the literature reviewed that RF is too sensitive to small changes in the training dataset, is occasionally unstable, and tends to overfit. KNN is easy to implement and understand but has the major drawback of becoming significantly slow as the size of the data in use grows, and the ideal value of K for the KNN classifier is difficult to set. SVM and RF are insensitive to noise or overtraining, which shows their ability to deal with unbalanced data. Larger input datasets lengthen classification times for NN and KNN more than for SVM and RF. Among these nonparametric classification methods, NN has the potential to become a more widely used classification algorithm, but because of its time-consuming parameter tuning procedure, high level of computational complexity, the numerous types of NN architectures to choose from and the high number of algorithms used for training, most researchers recommend SVM and RF as easier and more widely used methods which repeatedly achieve results with high accuracies and are often faster to implement.
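A side-by-side comparison of the four reviewed classifiers on accuracy and run time can be sketched as follows; the dataset and hyperparameters are illustrative choices, not drawn from the reviewed articles.

```python
# Minimal sketch comparing KNN, SVM, RF and NN with cross-validated accuracy and wall time.
import time
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "NN":  make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in models.items():
    start = time.perf_counter()
    scores = cross_val_score(model, X, y, cv=5)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}, {elapsed:.1f}s")
```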
Real-time intelligent lithology identification while drilling is vital to realizing downhole closed-loop drilling. The complex and changeable geological environment encountered in drilling makes lithology identification face many challenges. This paper addresses the problems of difficult feature information extraction, low precision of thin-layer identification and limited model applicability in intelligent lithology identification, and improves the comprehensive performance of the identification model from three aspects: data feature extraction, class balance, and model design. A new real-time intelligent lithology identification model, the dynamic felling strategy weighted random forest algorithm (DFW-RF), is proposed. According to the feature selection results, gamma ray and 2 MHz phase resistivity are the logging while drilling (LWD) parameters that significantly influence lithology identification. The comprehensive performance of the DFW-RF lithology identification model has been verified in applications to 3 wells in different areas. Compared with the prediction results of five typical lithology identification algorithms, the DFW-RF model achieves a higher lithology identification accuracy rate and F1 score. The model improves the identification accuracy of thin-layer lithology and is effective and feasible in different geological environments. The DFW-RF model plays a truly efficient role in the real-time intelligent identification of lithologic information in closed-loop drilling and has broad applicability, making it worthy of wide use in logging interpretation.
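The class-balance aspect and the F1 metric mentioned above can be illustrated with a class-weighted random forest on imbalanced lithology labels; the LWD features below are synthetic placeholders and the dynamic felling / weighting strategy of DFW-RF is not reproduced.

```python
# Minimal sketch of a class-balanced RF for lithology labels, scored with macro F1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gamma_ray = rng.normal(80, 25, n)           # API units (illustrative)
phase_res = rng.lognormal(1.0, 0.6, n)      # ohm*m (illustrative)
# Imbalanced labels: class 2 mimics a rarely sampled thin layer.
lithology = np.where(gamma_ray > 100, 0, np.where(phase_res > 4.0, 2, 1))

X = np.column_stack([gamma_ray, phase_res])
X_tr, X_te, y_tr, y_te = train_test_split(X, lithology, stratify=lithology, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```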
Precise recovery of Coalbed Methane (CBM) based on transparent reconstruction of geological conditions is a branch of intelligent mining. The process of permeability reconstruction, ranging from data perception to real-time data visualization, is applicable to disaster risk warning and intelligent decision-making on gas drainage. In this study, a machine learning method integrating the Random Forest (RF) and the Genetic Algorithm (GA) was established for permeability prediction in the Xishan Coalfield based on Uniaxial Compressive Strength (UCS), effective stress, temperature and gas pressure. A total of 50 sets of data collected by a self-developed apparatus were used to generate datasets for training and validating the models. Statistical measures including the coefficient of determination (R^2) and Root Mean Square Error (RMSE) were selected to validate and compare the predictive performances of the single RF model and the hybrid RF–GA model. Furthermore, sensitivity studies were conducted to evaluate the importance of the input parameters. The results show that the proposed RF–GA model is robust in predicting permeability; UCS is directly correlated to permeability, while all other inputs are inversely related to permeability; and the effective stress exerts the greatest impact on permeability based on importance score, followed by the temperature (or gas pressure) and UCS. The partial dependence plots, indicative of the marginal utility of each feature in permeability prediction, are in line with experimental results. Thus, the proposed hybrid RF–GA model is capable of predicting permeability and is beneficial to precise CBM recovery.
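A compact sketch of this workflow is given below: an RF regressor is tuned and scored with the R^2 and RMSE measures named above, and input importances are ranked. The synthetic samples stand in for the 50 laboratory measurements, and a small grid search stands in for the genetic-algorithm tuning of the RF–GA model.

```python
# Minimal sketch of permeability prediction with a tuned random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 50
ucs        = rng.uniform(10, 60, n)    # MPa
eff_stress = rng.uniform(2, 12, n)     # MPa
temp       = rng.uniform(20, 60, n)    # degrees C
gas_p      = rng.uniform(0.5, 3.0, n)  # MPa
perm = 0.05 * ucs - 0.4 * eff_stress - 0.02 * temp - 0.3 * gas_p + rng.normal(0, 0.2, n)

X = np.column_stack([ucs, eff_stress, temp, gas_p])
X_tr, X_te, y_tr, y_te = train_test_split(X, perm, test_size=0.2, random_state=0)

search = GridSearchCV(RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 5]}, cv=5)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)

imp = permutation_importance(search.best_estimator_, X_te, y_te, random_state=0)
for name, score in zip(["UCS", "effective stress", "temperature", "gas pressure"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```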
Anomaly classification based on network traffic features is an important task for monitoring and detecting network intrusion attacks. Network-based intrusion detection systems (NIDSs) using machine learning (ML) methods are effective tools for protecting network infrastructures and services from unpredictable and unseen attacks. Among several ML methods, random forest (RF) is a robust method that can be used in ML-based network intrusion detection solutions. However, the minimum number of instances for each split and the number of trees in the forest are two key parameters of RF that can affect classification accuracy. Therefore, optimal parameter selection is a real problem in RF-based anomaly classification for intrusion detection systems. In this paper, we propose to use the genetic algorithm (GA) to select appropriate values of these two parameters, optimizing the RF classifier and improving the classification accuracy of normal and abnormal network traffic. To validate the proposed GA-based RF model, a number of experiments are conducted on two public datasets and evaluated using a set of performance evaluation measures. In these experiments, the accuracy result is compared with the accuracies of baseline ML classifiers in recent works. Experimental results reveal that the proposed model can avert the uncertainty in selecting the values of RF's parameters, improving the accuracy of anomaly classification in NIDSs without incurring excessive time.
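Tuning the two RF parameters named above can be sketched as follows; an exhaustive grid search stands in for the genetic algorithm, and synthetic flows stand in for the public NIDS datasets.

```python
# Minimal sketch of tuning n_estimators and min_samples_split for RF-based anomaly classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=3000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)  # 1 = attack traffic (synthetic)

param_grid = {"n_estimators": [50, 100, 200], "min_samples_split": [2, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                      cv=5, scoring="accuracy")
search.fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 4))
```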
Estimating the volume growth of forest ecosystems accurately is important for understanding carbon sequestration and achieving carbon neutrality goals. However, the key environmental factors affecting volume growth differ across scales and plant functional types. This study was therefore conducted to estimate the volume growth of Larix and Quercus forests based on national-scale forestry inventory data in China, and to identify its influencing factors, using random forest algorithms. The results showed that model performance for volume growth in natural forests (R^2 = 0.65 for Larix and 0.66 for Quercus, respectively) was better than in planted forests (R^2 = 0.44 for Larix and 0.40 for Quercus, respectively). In both natural and planted forests, stand age showed a strong relative importance for volume growth (8.6%–66.2%), while the edaphic and climatic variables had a limited relative importance (<6.0%). The relationship between stand age and volume growth was unimodal in natural forests and linearly increasing in planted Quercus forests. The specific locations (i.e., altitude and aspect) of sampling plots exhibited high relative importance for volume growth in planted forests (4.1%–18.2%). Altitude positively affected volume growth in planted Larix forests but negatively controlled volume growth in planted Quercus forests. Similarly, the effects of other environmental factors on volume growth also differed between stand origins (planted versus natural) and plant functional types (Larix versus Quercus). These results highlight that stand age was the most important predictor for volume growth and that environmental factors had diverse effects on volume growth among stand origins and plant functional types. Our findings provide a good framework for site-specific recommendations regarding the management practices necessary to maintain volume growth in China's forest ecosystems.
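The kind of stand-age effect examined above can be visualized with a one-feature partial dependence curve, computed here by averaging model predictions over an age grid. The variables, units and the unimodal shape injected into the synthetic response are illustrative, not the inventory data.

```python
# Minimal sketch of a manual partial-dependence curve for stand age from an RF regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
stand_age   = rng.uniform(5, 150, n)     # years
altitude    = rng.uniform(100, 2500, n)  # m
temperature = rng.uniform(-2, 18, n)     # degrees C
# Synthetic unimodal volume-growth response peaking at intermediate ages.
growth = 8 * np.exp(-((stand_age - 60) / 40) ** 2) + 0.001 * altitude + rng.normal(0, 0.5, n)

X = np.column_stack([stand_age, altitude, temperature])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, growth)

# Sweep stand age while holding the other columns at their observed values.
age_grid = np.linspace(5, 150, 30)
pd_curve = []
for age in age_grid:
    X_mod = X.copy()
    X_mod[:, 0] = age
    pd_curve.append(model.predict(X_mod).mean())
print(list(zip(age_grid.round(0), np.round(pd_curve, 2))))
```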
This paper presents a new framework for object-based classification of high-resolution hyperspectral data. This multi-step framework is based on multi-resolution segmentation (MRS) and Random Forest classifier (RFC) algorithms. The first step is to determine the weights of the input features when using the object-based approach with MRS to process such images. Given the high number of input features, an automatic method is needed to estimate this parameter. We therefore used the Variable Importance (VI), one of the outputs of the RFC, to determine the importance of each image band. Then, based on this parameter and other required parameters, the image is segmented into homogeneous regions. Finally, the RFC is applied to the characteristics of the segments to convert them into meaningful objects. The proposed method, as well as the conventional pixel-based RFC and Support Vector Machine (SVM) methods, was applied to three different hyperspectral datasets with various spectral and spatial characteristics. These data were acquired by the HyMap, the Airborne Prism Experiment (APEX), and the Compact Airborne Spectrographic Imager (CASI) hyperspectral sensors. The experimental results show that the proposed method is more consistent for land cover mapping in various areas. The overall classification accuracy (OA) obtained by the proposed method was 95.48, 86.57, and 84.29% for the HyMap, APEX, and CASI datasets, respectively. Moreover, this method showed better efficiency than the spectral-based classifications, because the OAs of the proposed method were 5.67 and 3.75% higher than those of the conventional RFC and SVM classifiers, respectively.
To address the poor location accuracy caused by the harsh and complex underground environment, long strip roadways, limited wireless transmission and sparse anchor nodes, an underground location algorithm based on random forest and compensation for environmental factors is proposed. First, the underground wireless access point (AP) network model and tunnel environment were analyzed, and a fingerprint location algorithm was built. The Received Signal Strength (RSS) was then filtered with a Kalman filter in both the offline sampling and real-time positioning stages. Meanwhile, a target speed constraint condition was introduced to reduce the error caused by environmental factors. The experimental results show that the proposed algorithm solves the problems of insufficient location accuracy and large fluctuations caused by the environment when anchor nodes are sparse. The average location accuracy reaches three meters, which satisfies applications such as underground rescue, activity track playback, and disaster monitoring and positioning. The algorithm has high application value in complex underground environments.
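The two pieces described above, Kalman filtering of RSS readings and a random forest trained on the filtered fingerprints, can be sketched as follows; the AP layout, noise levels and roadway coordinates are illustrative assumptions, and the speed-constraint compensation is not reproduced.

```python
# Minimal sketch: Kalman-filtered RSS fingerprints feeding a random forest position model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def kalman_filter_rss(rss, process_var=1e-2, meas_var=4.0):
    """Smooth a 1-D RSS time series with a constant-state Kalman filter."""
    estimate, error = rss[0], 1.0
    smoothed = []
    for z in rss:
        error += process_var                   # predict
        gain = error / (error + meas_var)      # update
        estimate += gain * (z - estimate)
        error *= (1.0 - gain)
        smoothed.append(estimate)
    return np.array(smoothed)

rng = np.random.default_rng(0)
n_points, n_aps, n_readings = 200, 4, 20
positions = rng.uniform(0, 100, size=n_points)       # 1-D roadway coordinate (m)
ap_locations = np.array([10.0, 40.0, 70.0, 95.0])
true_rss = -40 - 20 * np.log10(np.abs(positions[:, None] - ap_locations) + 1.0)

# At each reference point, filter a short burst of noisy readings per AP and keep
# the final estimate as the fingerprint value.
fingerprints = np.empty((n_points, n_aps))
for i in range(n_points):
    for j in range(n_aps):
        readings = true_rss[i, j] + rng.normal(0, 2.0, size=n_readings)
        fingerprints[i, j] = kalman_filter_rss(readings)[-1]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(fingerprints, positions)
errors = np.abs(model.predict(fingerprints) - positions)
print("mean absolute localization error (m):", errors.mean())  # training-set error; a held-out grid would be used in practice
```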
During construction, the shield linings of tunnels often face the problem of local or overall upward movement after leaving the shield tail, in soft soil areas or in some large-diameter shield projects. Differential floating increases the initial stress on the segments and bolts, which is harmful to the service performance of the tunnel. In this study we used a random forest (RF) algorithm combined with particle swarm optimization (PSO) and 5-fold cross-validation (5-fold CV) to predict the maximum upward displacement of tunnel linings induced by shield tunnel excavation. The mechanism and factors causing upward movement of the tunnel lining are comprehensively summarized. Twelve input variables were selected according to the results of an analysis of influencing factors. The prediction performance of two models, PSO-RF and RF (default), was compared. The Gini value was obtained to represent the relative importance of the influencing factors to the upward displacement of the linings. The PSO-RF model successfully predicted the maximum upward displacement of the tunnel linings with low error (mean absolute error (MAE) = 4.04 mm, root mean square error (RMSE) = 5.67 mm) and high correlation (R^2 = 0.915). The thrust and the depth of the tunnel were the most important factors in the prediction model influencing the upward displacement of the tunnel linings.
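The evaluation loop described above, a random forest regressor scored with 5-fold cross-validation on MAE, RMSE and R^2 together with Gini-based importances, can be sketched as below; the twelve input variables are synthetic placeholders and the PSO hyperparameter search is not reproduced.

```python
# Minimal sketch of 5-fold CV evaluation and Gini importance ranking for an RF regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

X, y = make_regression(n_samples=400, n_features=12, noise=5.0, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_validate(model, X, y, cv=5,
                        scoring=("neg_mean_absolute_error",
                                 "neg_root_mean_squared_error", "r2"))
print("MAE :", -scores["test_neg_mean_absolute_error"].mean())
print("RMSE:", -scores["test_neg_root_mean_squared_error"].mean())
print("R^2 :", scores["test_r2"].mean())

model.fit(X, y)  # refit on all data to inspect Gini importances
ranking = np.argsort(model.feature_importances_)[::-1]
print("features ranked by Gini importance:", ranking)
```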
Funding (H9N2 receptor-preference study): supported by the National Natural Science Foundation of China (32273037 and 32102636), the Guangdong Major Project of Basic and Applied Basic Research (2020B0301030007), the Laboratory of Lingnan Modern Agriculture Project (NT2021007), the Guangdong Science and Technology Innovation Leading Talent Program (2019TX05N098), the 111 Center (D20008), the double first-class discipline promotion project (2023B10564003), and the Department of Education of Guangdong Province (2019KZDXM004 and 2019KCXTD001).
Funding (LIBS Al alloy classification study): supported by the National High Technology Research and Development Program of China (863 Program, No. 2013AA102402).
Funding (GA-iForest outlier detection study): supported by the State Grid Liaoning Electric Power Supply Co., Ltd. under the project "Key Technology and Application Research of the Self-Service Grid Big Data Governance" (No. SGLNXT00YJJS1800110).
Funding (DFW-RF lithology identification study): financially supported by the National Natural Science Foundation of China (No. 52174001 and No. 52004064), the Hainan Province Science and Technology Special Fund "Research on Real-time Intelligent Sensing Technology for Closed-loop Drilling of Oil and Gas Reservoirs in Deepwater Drilling" (ZDYF2023GXJS012), and the Heilongjiang Provincial Government and Daqing Oilfield's first batch of scientific and technological key projects, "Research on the Construction Technology of Gulong Shale Oil Big Data Analysis System" (DQYT-2022-JS-750).
Funding (CBM permeability study): this work has been supported by the Fundamental Research Funds for the Central Universities [2017XKZD06].
Funding (forest volume growth study): supported by the Major Program of the National Natural Science Foundation of China (No. 32192434), the Fundamental Research Funds of the Chinese Academy of Forestry (No. CAFYBB2019ZD001), and the National Key Research and Development Program of China (2016YFD060020602).
Funding (underground location study): the work was supported by the Projects of Natural Science Foundation in Higher Education Institutions of Anhui Province (KJ2017A449) and Chaohu University's Innovation and Entrepreneurship Training Program for Provincial College Students in 2019 (No. S201910380042).
Funding (shield tunnel lining study): supported by the Basic Science Center Program for Multiphase Evolution in Hyper Gravity of the National Natural Science Foundation of China (No. 51988101), the National Natural Science Foundation of China (No. 52178306), and the Zhejiang Provincial Natural Science Foundation of China (No. LR19E080002).