Journal Articles
9,268 articles found
1. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, Issue 1, pp. 1804-1818 (15 pages).
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalanced data
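The SMOTE step this abstract describes synthesizes minority-class samples by interpolating between an existing minority sample and one of its nearest minority-class neighbours. A minimal pure-Python sketch of that idea (the function name and toy points are illustrative, not from the paper):

```python
import random

def smote_oversample(minority, n_synthetic, k=2, seed=0):
    """Minimal SMOTE-style oversampling: each synthetic point is a random
    interpolation between a minority sample and one of its k nearest
    minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote_oversample(minority, n_synthetic=4)
print(len(new_points))  # 4
```

Because every synthetic point lies on a segment between two minority samples, the oversampled class stays inside the convex hull of the originals; production code would normally use a library implementation such as imbalanced-learn's SMOTE instead.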
2. Research on Bearing Fault Diagnosis Method Based on Deep Learning (cited 1 time)
Author: Ting Zheng. Journal of Electronic Research and Application, 2025, Issue 1, pp. 1-6 (6 pages).
Bearings are indispensable key components in mechanical equipment, and their working state is directly related to the stability and safety of the whole machine. In recent years, the rapid development of artificial intelligence technology, especially breakthroughs in deep learning, has provided new ideas for bearing fault diagnosis. Deep learning can automatically learn features from large amounts of data, has strong nonlinear modeling ability, and can effectively address the shortcomings of traditional methods. Focusing on the key problems in bearing fault diagnosis, this paper studies fault diagnosis methods based on deep learning, which not only provides a new solution for bearing fault diagnosis but also offers a reference for applying deep learning to other mechanical fault diagnosis fields.
Keywords: deep learning; bearing failure; diagnostic methods
3. Methodology for Detecting Non-Technical Energy Losses Using an Ensemble of Machine Learning Algorithms
Authors: Irbek Morgoev, Roman Klyuev, Angelika Morgoeva. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 1381-1399 (19 pages).
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. Their resolution determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. The model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm over the base classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), the model can serve as the methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. Using such a sample will allow the top management of electric distribution companies to increase the efficiency of field inspections, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
Keywords: non-technical losses; smart grid; machine learning; electricity theft; fraud; ensemble algorithm; hybrid method; forecasting; classification; supervised learning
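The stacking meta-algorithm in this entry feeds base-classifier outputs into a meta-learner. A minimal sketch of that data flow only, with toy threshold classifiers and a majority-vote meta-learner standing in for the paper's random forest, LightGBM, and neural-network ensemble (real stacking would train the meta-learner on out-of-fold base predictions):

```python
def make_threshold_classifier(feature_idx, threshold):
    """A trivial base learner: predicts 1 (suspected NTL) when one
    consumption feature exceeds a threshold."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

def stack_predict(base_learners, meta_learner, x):
    """Stacking: each base learner's prediction becomes a meta-feature,
    and the meta-learner makes the final call."""
    meta_features = [clf(x) for clf in base_learners]
    return meta_learner(meta_features)

base = [make_threshold_classifier(0, 0.5),
        make_threshold_classifier(1, 0.3),
        make_threshold_classifier(2, 0.8)]
majority = lambda votes: 1 if sum(votes) > len(votes) / 2 else 0

x = (0.9, 0.1, 0.95)  # two of three base learners vote 1
print(stack_predict(base, majority, x))  # 1
```

In practice the meta-learner is itself a trained model (e.g. logistic regression) rather than a fixed vote, which is what lets stacking outperform its best base classifier.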
4. The Acoustic Attenuation Prediction for Seafloor Sediment Based on in-situ Data and Machine Learning Methods
Authors: WANG Jingqiang, HOU Zhengyu, CHEN Yinglin, LI Guanbao, KAN Guangming, XIAO Peng, LI Zhenglin, MO Dinghao, HUANG Jingyi. Journal of Ocean University of China, 2025, Issue 1, pp. 95-102 (8 pages).
Accurate acquisition and prediction of the acoustic parameters of seabed sediments are crucial in marine sound propagation research. While the relationship between sound velocity and the physical properties of sediment has been extensively studied, there is still no consensus on the correlation between the acoustic attenuation coefficient and sediment physical properties; predicting the acoustic attenuation coefficient remains a challenging issue in sedimentary acoustic research. In this study, we propose a prediction method for the acoustic attenuation coefficient using machine learning algorithms, specifically the random forest (RF), support vector regression (SVR), and convolutional neural network (CNN) algorithms. We utilized the acoustic attenuation coefficient and sediment particle size data from 52 stations as training parameters, with the particle size parameters as the input feature matrix and measured acoustic attenuation as the training label, to validate the attenuation prediction model. Our results indicate that the error of the attenuation prediction model is small. Among the three models, the RF model exhibited the lowest prediction error, with a mean squared error of 0.8232, mean absolute error of 0.6613, and root mean squared error of 0.9073. Additionally, when we applied the models to data collected at different times in the same region, we found that they also demonstrated a certain level of reliability in real prediction scenarios. Our approach demonstrates that constructing a sediment acoustic characteristics model based on machine learning is feasible to a certain extent and offers a novel perspective for studying sediment acoustic properties.
Keywords: in-situ measurement; attenuation; seafloor sediment; machine learning methods
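The three error metrics this abstract reports are related: RMSE is the square root of MSE, and indeed sqrt(0.8232) ≈ 0.9073, so the quoted figures are internally consistent. A short sketch of how the metrics are computed (toy numbers, not the paper's data):

```python
def regression_errors(y_true, y_pred):
    """MSE, MAE, and RMSE as commonly used to score regression models."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mse, mae, mse ** 0.5  # RMSE is the square root of MSE

mse, mae, rmse = regression_errors([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
print(round(mse, 4), round(mae, 4), round(rmse, 4))  # 1.3333 0.6667 1.1547
```

Because MSE squares residuals, it penalizes the single large error (2.0) far more than MAE does, which is why papers often report both.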
5. Overview of Efficient Numerical Computing Methods Based on Deep Learning
Author: Kejun Yang. Journal of Electronic Research and Application, 2025, Issue 2, pp. 117-124 (8 pages).
This article reviews the application and progress of deep learning in efficient numerical computing methods. Deep learning, as an important branch of machine learning, provides new ideas for numerical computation by constructing multi-layer neural networks that emulate the learning process of the human brain. The article explores the application of deep learning in solving partial differential equations, optimization problems, and data-driven modeling, and analyzes its advantages in computational efficiency, accuracy, and adaptability. It also points out the challenges that deep learning numerical methods face in terms of computational efficiency, interpretability, and generalization ability, and proposes strategies for integrating them with traditional numerical methods along with future development directions.
Keywords: deep learning; efficient numerical computing methods
6. An Automatic Damage Detection Method Based on Adaptive Theory-Assisted Reinforcement Learning
Authors: Chengwen Zhang, Qing Chun, Yijie Lin. Engineering, 2025, Issue 7, pp. 188-202 (15 pages).
Current damage detection methods based on model updating and sensitivity Jacobian matrices show a low convergence ratio and computational efficiency for online calculations. The aim of this paper is to construct a real-time automated damage detection method by developing a theory-assisted adaptive multiagent twin delayed deep deterministic (TA2-MATD3) policy gradient algorithm. First, the theoretical framework of reinforcement-learning-driven damage detection is established. To address the disadvantages of the traditional multiagent twin delayed deep deterministic (MATD3) method, a theory-assisted mechanism and an adaptive experience replay mechanism are introduced. A historical residential house built in 1889 was then taken as an example, using its 12-month structural health monitoring data. TA2-MATD3 was compared with existing damage detection methods in terms of convergence ratio, online computing efficiency, and damage detection accuracy. The results show that the computational efficiency of TA2-MATD3 is approximately 117-160 times that of the traditional methods. The convergence ratio of damage detection on the training set is approximately 97%, and that on the test set is in the range of 86.2%-91.9%. In addition, the main visible damage found in the field survey was identified by TA2-MATD3. These results indicate that the proposed method can significantly improve online computing efficiency and damage detection accuracy. This research provides novel perspectives on the use of reinforcement learning methods for damage detection in online structural health monitoring.
Keywords: reinforcement learning; theory-assisted; damage detection; Newton's method; model updating; architectural heritage
7. Task-based Teaching and Learning in English Listening Class
Author: 鲍蓉芳. 科技信息 (Science & Technology Information), 2008, Issue 17, pp. 226-227, 241 (3 pages).
In technical college English listening classes, the task-based teaching and learning method can not only create a harmonious environment for students' learning but also stimulate students' enthusiasm in listening class. Students can thus benefit a great deal, and listening instruction can be carried out successfully.
Keywords: higher education; English; teaching methods; dictation ability
8. Autonomous and Incidental Learning of Vocabulary Through Task-based Language Teaching
Author: 覃成海. 海外英语 (Overseas English), 2017, Issue 6, pp. 221-222 (2 pages).
This study discusses the effects of three tasks involving the translation of authentic business reports on L2 vocabulary learning. 160 students were chosen from different majors by a pre-task proficiency test. The findings revealed that task 3 was the optimal task for vocabulary gain, and that direct vocabulary learning had a more facilitative effect than incidental vocabulary learning in this translation task for the learners with the lowest vocabulary level. The study also suggested that the need and evaluation components needed to be adjusted and attended to for learners with the lowest vocabulary level.
Keywords: vocabulary learning; involvement load hypothesis; task-based language teaching
9. Consideration on Vocational Students' Participation in Class: With Special Reference to Task-based Approach and Traditional Method
Author: 董晓霞. 海外英语 (Overseas English), 2011, Issue 4X, pp. 111-113 (3 pages).
This paper presents an experimental study of vocational students' classroom participation under the task-based approach and the traditional grammar-translation teaching method. As vocational colleges in China have developed rapidly, teachers are still exploring and experimenting with different teaching approaches in order to find suitable ones. Drawing on the theory of the task-based approach and the grammar-translation method, the author conducted an experiment by recording classes, addressing the following question: under which teaching approach do students participate in class more actively? The study found that the task-based approach promotes students' mastery of vocabulary and grammar, and that students participate in class more actively under it. The author shares her experience for pedagogical studies of teaching English to vocational college students.
Keywords: task-based approach; grammar-translation method; students' participation
10. The State-of-the-Art Review on Applications of Intrusive Sensing, Image Processing Techniques, and Machine Learning Methods in Pavement Monitoring and Analysis (cited 23 times)
Authors: Yue Hou, Qiuhan Li, Chen Zhang, Guoyang Lu, Zhoujing Ye, Yihan Chen, Linbing Wang, Dandan Cao. Engineering (SCIE, EI), 2021, Issue 6, pp. 845-856 (12 pages).
In modern transportation, pavement is one of the most important civil infrastructures for the movement of vehicles and pedestrians. Pavement service quality and service life are of great importance for civil engineers, as they directly affect regular service for users. Therefore, monitoring the health status of pavement before irreversible damage occurs is essential for timely maintenance, which in turn ensures public transportation safety. Many pavement damages can be detected and analyzed by monitoring structural dynamic responses and evaluating road surface conditions. Advanced technologies can be employed for the collection and analysis of such data, including various intrusive sensing techniques, image processing techniques, and machine learning methods. This review summarizes the state-of-the-art of these three technologies in pavement engineering in recent years and suggests possible developments for future pavement monitoring and analysis based on these approaches.
Keywords: pavement monitoring and analysis; state-of-the-art review; intrusive sensing; image processing techniques; machine learning methods
11. A liquid loading prediction method of gas pipeline based on machine learning (cited 5 times)
Authors: Bing-Yuan Hong, Sheng-Nan Liu, Xiao-Ping Li, Di Fan, Shuai-Peng Ji, Si-Hang Chen, Cui-Cui Li, Jing Gong. Petroleum Science (SCIE, CAS, CSCD), 2022, Issue 6, pp. 3004-3015 (12 pages).
Liquid loading is one of the most frequently encountered phenomena in gas pipeline transportation, reducing transmission efficiency and threatening flow assurance. However, most traditional mechanism models are semi-empirical and must be re-solved under different working conditions through a complex calculation process. The development of big data technology and artificial intelligence makes it possible to establish data-driven models. This paper aims to establish a liquid loading prediction model for natural gas pipelines with high generalization ability based on machine learning. First, according to the characteristics of an actual gas pipeline, a variety of reasonable combinations of working conditions, such as different gas velocities, pipe diameters, water contents, and outlet pressures, were set, and multiple undulating pipeline topographies with different elevation differences were established. A large number of simulations were then performed with the simulator OLGA to obtain the data required for machine learning. After data preprocessing, six supervised learning algorithms, including support vector machine (SVM), decision tree (DT), random forest (RF), artificial neural network (ANN), naive Bayes classification (NBC), and the K-nearest neighbor algorithm (KNN), were compared to evaluate liquid loading prediction performance. Finally, RF and KNN, which performed better, were selected for parameter tuning and then applied to an actual pipeline for liquid loading location prediction. Compared with OLGA simulation, the established data-driven model not only improves calculation efficiency and reduces workload but also provides technical support for gas pipeline flow assurance.
Keywords: liquid loading; data-driven method; machine learning; gas pipeline; multiphase flow
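KNN, one of the two classifiers this entry finally selects, is simple enough to sketch in full: classify an operating point by a majority vote among its nearest labelled neighbours. The feature pair and labels below are invented toy data, not the paper's OLGA simulation features:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour vote with Euclidean distance."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(zip(train_X, train_y),
                     key=lambda pair: dist(pair[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy operating points: (gas velocity, water content) -> loading state.
train_X = [(1.0, 0.8), (1.2, 0.9), (5.0, 0.1), (6.0, 0.2)]
train_y = ['loading', 'loading', 'no loading', 'no loading']
print(knn_predict(train_X, train_y, (1.1, 0.85)))  # loading
```

Because KNN keeps the whole training set and has no explicit model, the "parameter tuning" the abstract mentions mostly amounts to choosing k and the distance/feature scaling.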
12. Relevant experience learning: A deep reinforcement learning method for UAV autonomous motion planning in complex unknown environments (cited 21 times)
Authors: Zijian HU, Xiaoguang GAO, Kaifang WAN, Yiwei ZHAI, Qianglong WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2021, Issue 12, pp. 187-204 (18 pages).
Unmanned Aerial Vehicles (UAVs) play a vital role in military operations. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method for the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have used Deep Reinforcement Learning (DRL) methods to address the AMP problem and have achieved good results. From the perspective of sampling, this paper designs a sampling method with double screening, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories in human education, and expands the influence of the learning process on action selection at the current state. All experiments are conducted in a complex unknown simulation environment constructed from the parameters of a real UAV. Training experiments show that REL-DDPG improves the convergence speed and convergence result compared to the state-of-the-art DDPG algorithm, while testing experiments show the applicability of the algorithm and investigate its performance under different parameter conditions.
Keywords: Autonomous Motion Planning (AMP); Deep Deterministic Policy Gradient (DDPG); Deep Reinforcement Learning (DRL); sampling method; UAV
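The PER mechanism this entry builds on draws transitions from the replay buffer with probability proportional to a priority (commonly the absolute TD error) raised to an exponent alpha. A minimal sketch of that sampling step only (the buffer contents and priorities are illustrative; a full PER implementation also applies importance-sampling weight corrections):

```python
import random

def sample_prioritized(buffer, priorities, batch_size, alpha=0.6, seed=0):
    """Prioritized replay sampling: each transition is drawn with
    probability proportional to priority**alpha (alpha=0 is uniform)."""
    rng = random.Random(seed)
    weights = [p ** alpha for p in priorities]
    return rng.choices(buffer, weights=weights, k=batch_size)

buffer = ['t1', 't2', 't3']
priorities = [5.0, 0.1, 0.1]  # e.g. absolute TD errors per transition
batch = sample_prioritized(buffer, priorities, batch_size=8)
print(len(batch))  # 8
```

With these priorities, transition t1 dominates the batch on average, which is exactly the bias toward informative experiences that PER exploits; production implementations use a sum-tree so sampling stays O(log n).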
13. A systematic machine learning method for reservoir identification and production prediction (cited 4 times)
Authors: Wei Liu, Zhangxin Chen, Yuan Hu, Liuyang Xu. Petroleum Science (SCIE, EI, CAS, CSCD), 2023, Issue 1, pp. 295-308 (14 pages).
Reservoir identification and production prediction are two of the most important tasks in petroleum exploration and development. Machine learning (ML) methods have been used in petroleum-related studies, but have not previously been applied to production prediction built on reservoir identification. Production forecasting studies are typically based on overall reservoir thickness and lack accuracy when reservoirs contain a water or dry layer without oil production. In this paper, a systematic ML method was developed using classification models for reservoir identification and regression models for production prediction, with the production models based on the reservoir identification results. For reservoir identification, seven optimized ML methods were used: four typical single ML methods and three ensemble ML methods. These methods classify the reservoir into five types of layers: water, dry, and three levels of oil (I, II, and III oil layers). The validation and test results suggest the three ensemble methods perform better than the four single ML methods in reservoir identification, with XGBoost producing the model with the highest accuracy, up to 99%. The effective thickness of the I and II oil layers determined during reservoir identification was fed into the models for predicting production. Effective thickness accounts for the distribution of water and oil, resulting in a more reasonable production prediction than predictions based on overall reservoir thickness. To validate the superiority of the ML methods, reference models using overall reservoir thickness were built for comparison. The models based on effective thickness outperformed the reference models in every evaluation metric; their prediction accuracy was 10% higher than that of the reference models. Free of the personal error and data distortion present in traditional methods, this system enables rapid analysis of data while reducing the time required to resolve reservoir classification and production prediction challenges. The ML models using the effective thickness obtained from reservoir identification were more accurate in predicting oil production than previous studies that use overall reservoir thickness.
Keywords: reservoir identification; production prediction; machine learning; ensemble method
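The key quantity this entry feeds from classification into regression is effective thickness: only the thickness of producing oil layers counts, while water and dry layers are excluded. A small sketch of that bookkeeping (layer labels and thicknesses are toy values):

```python
def effective_thickness(layers):
    """Sum the thickness of oil-bearing layers only (here classes 'I' and
    'II', as the abstract describes), skipping water and dry layers."""
    oil_classes = {'I', 'II'}
    return sum(t for cls, t in layers if cls in oil_classes)

# (predicted layer class, layer thickness in metres)
layers = [('water', 2.0), ('I', 3.5), ('dry', 1.0), ('II', 2.5), ('III', 4.0)]
print(effective_thickness(layers))  # 6.0
```

Compare this with the overall thickness of 13.0 m: a production model driven by the 6.0 m effective value cannot be misled by the non-producing intervals, which is the accuracy gain the abstract reports.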
14. Soliton, breather, and rogue wave solutions for solving the nonlinear Schrodinger equation using a deep learning method with physical constraints (cited 6 times)
Authors: Jun-Cai Pu, Jun Li, Yong Chen. Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, Issue 6, pp. 77-87 (11 pages).
The nonlinear Schrodinger equation is a classical integrable equation with many significant properties that occurs in many physical areas. However, because this equation is difficult to solve, particularly in high dimensions, many methods have been proposed to obtain different kinds of solutions, among them neural networks. Recently, a method was proposed in which underlying physical laws are embedded into a conventional neural network to uncover the equation's dynamical behaviors directly from spatiotemporal data. Compared with traditional neural networks, this method can obtain remarkably accurate solutions with far less data, while also providing better physical explanation and generalization. In this paper, based on the above method, we present an improved deep learning method to recover the soliton solutions, breather solution, and rogue wave solutions of the nonlinear Schrodinger equation. In particular, the dynamical behaviors and error analysis of the first-order and second-order rogue waves of nonlinear integrable equations are revealed by a deep neural network with physical constraints for the first time. Moreover, the effects of different numbers of sampled initial points, sampled collocation points, network layers, and neurons per hidden layer on the first-order rogue wave dynamics of this equation are examined using the control variable method under the same initial and boundary conditions. Numerical experiments show that the dynamical behaviors of soliton solutions, breather solution, and rogue wave solutions of the integrable nonlinear Schrodinger equation can be well reconstructed using this physically-constrained deep learning method.
Keywords: deep learning method; neural network; soliton solutions; breather solution; rogue wave solutions
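The physically-constrained network in this entry minimizes a PDE residual alongside the data misfit. Assuming the standard focusing NLS normalization (the exact scaling used in the paper may differ), the training loss for a network approximation $\hat\psi(t,x)$ can be written as:

```latex
i\,\psi_t + \tfrac{1}{2}\,\psi_{xx} + |\psi|^2\psi = 0,
\qquad
f(t,x) := i\,\hat\psi_t + \tfrac{1}{2}\,\hat\psi_{xx} + |\hat\psi|^2\hat\psi,
\qquad
\mathcal{L}
= \frac{1}{N_0}\sum_{j=1}^{N_0}\bigl|\hat\psi(0,x_j)-\psi_0(x_j)\bigr|^2
+ \frac{1}{N_b}\sum_{j=1}^{N_b}\bigl|\hat\psi(t_j,x_b)-\psi_b(t_j)\bigr|^2
+ \frac{1}{N_f}\sum_{j=1}^{N_f}\bigl|f(t_j,x_j)\bigr|^2
```

The three sums penalize mismatch with the initial data, mismatch with the boundary data, and violation of the equation at $N_f$ interior collocation points; the residual derivatives are obtained by automatic differentiation of the network, which is what lets the model train on "extraordinarily less data" than a purely data-driven fit.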
15. A mesh optimization method using machine learning technique and variational mesh adaptation (cited 5 times)
Authors: Tingfan WU, Xuejun LIU, Wei AN, Zenghui HUANG, Hongqiang LYU. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, Issue 3, pp. 27-41 (15 pages).
Computational mesh is an important ingredient that affects the accuracy and efficiency of CFD numerical simulation. Whereas many adaptive mesh methods introduce large computational costs, moving mesh methods keep the number of nodes and the topology of a mesh unchanged and do not increase CFD computational expense. As the state-of-the-art moving mesh method, the variational mesh adaptation approach has been introduced to CFD calculation. However, quickly estimating the flow field on the updated meshes during the iterative algorithm is challenging. A mesh optimization method is proposed that embeds a machine learning regression model into variational mesh adaptation. The regression model captures the mapping between the initial mesh nodes and the flow field, so that the variational method can move mesh nodes iteratively by solving the mesh functional built from the flow field estimated on the updated mesh via the regression model. After optimization, the density of nodes in high-gradient areas increases while the density in low-gradient areas decreases. Benchmark examples are first used to verify the feasibility and effectiveness of the proposed method, and steady subsonic and transonic flows over a cylinder and the NACA0012 airfoil on unstructured triangular meshes are then used to test it. Results show that the proposed method significantly improves the accuracy of local flow features on the adaptive meshes. This work indicates that the proposed mesh optimization approach is promising for improving the accuracy and efficiency of CFD computation.
Keywords: CFD; flow field; machine learning; moving mesh method; regression models; variational mesh adaptation
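The node clustering this abstract describes, more nodes where gradients are high, is the equidistribution idea behind variational moving-mesh methods: place nodes so each cell carries an equal share of a monitor (density) function's integral. A 1-D pure-Python sketch of equidistribution only (the monitor function below is a toy stand-in for a flow-gradient-based monitor):

```python
def equidistribute(n_nodes, density, domain=(0.0, 1.0), resolution=1000):
    """1-D equidistribution: place nodes so that each interval between
    consecutive nodes holds an equal integral of the monitor (density)
    function, so nodes cluster where the density is high."""
    a, b = domain
    h = (b - a) / resolution
    xs = [a + i * h for i in range(resolution + 1)]
    # Cumulative integral of the monitor function (trapezoid rule).
    cum = [0.0]
    for i in range(resolution):
        cum.append(cum[-1] + 0.5 * (density(xs[i]) + density(xs[i + 1])) * h)
    total = cum[-1]
    nodes = []
    for k in range(n_nodes):
        target = total * k / (n_nodes - 1)
        # First grid point whose cumulative integral reaches the target.
        j = next(i for i, c in enumerate(cum) if c >= target)
        nodes.append(xs[j])
    return nodes

# Monitor spikes in the middle of the domain, e.g. near a shock.
nodes = equidistribute(5, lambda x: 1.0 + 50.0 * (0.4 < x < 0.6))
print(nodes)
```

With this monitor, the three interior nodes bunch around x = 0.5 while the end intervals stay coarse, mimicking how the paper's adapted meshes concentrate resolution at high-gradient flow features without adding nodes.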
16. Accurate machine learning models based on small dataset of energetic materials through spatial matrix featurization methods (cited 8 times)
Authors: Chao Chen, Danyang Liu, Siyan Deng, Lixiang Zhong, Serene Hay Yee Chan, Shuzhou Li, Huey Hoon Hng. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2021, Issue 12, pp. 364-375, I0009 (13 pages).
A large database is desired for machine learning (ML) technology to make accurate predictions of materials' physicochemical properties based on their molecular structure. When a large database is not available, developing a proper featurization method based on the physicochemical nature of the target properties can improve the predictive power of ML models built on a smaller database. In this work, we show that two new featurization methods, the volume occupation spatial matrix and the heat contribution spatial matrix, can improve the accuracy of predicting energetic materials' crystal density (ρ_crystal) and solid-phase enthalpy of formation (H_f,solid) using a database containing 451 energetic molecules. Their mean absolute errors are reduced from 0.048 g/cm^3 and 24.67 kcal/mol to 0.035 g/cm^3 and 9.66 kcal/mol, respectively. By leave-one-out cross-validation, the newly developed ML models can be used to determine the performance of most kinds of energetic materials except cubanes. Our ML models were applied to predict ρ_crystal and H_f,solid for CHON-based molecules in the 150-million-entry PubChem database, and screened out 56 candidates with competitive detonation performance and reasonable chemical structures. With further improvement, spatial matrices have the potential to become multifunctional ML simulation tools that provide even better predictions in wider fields of materials science.
Keywords: small database machine learning; energetic materials screening; spatial matrix featurization method; crystal density; formation enthalpy; n-body interactions
17. Bayesian machine learning-based method for prediction of slope failure time (cited 7 times)
作者 Jie Zhang Zipeng Wang +2 位作者 Jinzheng Hu Shihao Xiao Wenyu Shang 《Journal of Rock Mechanics and Geotechnical Engineering》 SCIE CSCD 2022年第4期1188-1199,共12页
The data-driven phenomenological models based on deformation measurements have been widely utilized to predict the slope failure time(SFT).The observational and model uncertainties could lead the predicted SFT calcula... The data-driven phenomenological models based on deformation measurements have been widely utilized to predict the slope failure time(SFT).The observational and model uncertainties could lead the predicted SFT calculated from the phenomenological models to deviate from the actual SFT.Currently,very limited study has been conducted on how to evaluate the effect of such uncertainties on SFT prediction.In this paper,a comprehensive slope failure database was compiled.A Bayesian machine learning(BML)-based method was developed to learn the model and observational uncertainties involved in SFT prediction,through which the probabilistic distribution of the SFT can be obtained.This method was illustrated in detail with an example.Verification studies show that the BML-based method is superior to the traditional inverse velocity method(INVM)and the maximum likelihood method for predicting SFT.The proposed method in this study provides an effective tool for SFT prediction. 展开更多
Keywords: Slope failure time (SFT); Bayesian machine learning (BML); Inverse velocity method (INVM)
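The inverse velocity method (INVM) that the Bayesian approach is benchmarked against can be sketched as follows. The monitoring record is synthetic, and the classical linear form of 1/v near failure is assumed:

```python
import numpy as np

# INVM baseline: near failure, inverse displacement velocity 1/v decays
# roughly linearly with time, and the failure time t_f is where the fitted
# line crosses zero.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # days (synthetic record)
v = 2.0 / (10.0 - t)                               # mm/day; true t_f is 10 days

inv_v = 1.0 / v
slope, intercept = np.polyfit(t, inv_v, 1)         # linear fit of 1/v against t
t_f = -intercept / slope                           # zero crossing of the fit
print(f"Predicted slope failure time: {t_f:.2f} days")
```

The BML method in the paper goes further by treating the model and observational uncertainties probabilistically, yielding a distribution over t_f rather than this single point estimate.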
18. Real-time determination of sandy soil stiffness during vibratory compaction incorporating machine learning method for intelligent compaction (Cited: 3)
Authors: Zhengheng Xu, Hadi Khabbaz, Behzad Fatahi, Di Wu — Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022, No. 5, pp. 1609-1625 (17 pages)
An emerging real-time ground compaction and quality-control technology, known as intelligent compaction (IC), has been applied to efficiently optimise full-area compaction. Although IC technology can provide real-time assessment of the uniformity of the compacted area, accurate determination of the soil stiffness required for quality control and design remains challenging. In this paper, a novel and advanced numerical model simulating the interaction of a vibratory drum and the soil beneath it is developed. The model is capable of evaluating the nonlinear behaviour of underlying soil subjected to dynamic loading by capturing the variation of damping with cyclic shear strain and the degradation of soil modulus. The interaction of the drum and the soil is simulated via the finite element method to develop a comprehensive dataset capturing the dynamic responses of the drum and the soil. More than a thousand three-dimensional (3D) numerical models covering various soil characteristics, roller weights, vibration amplitudes and frequencies were adopted. The developed dataset is then used to train the inverse solver using an innovative machine learning approach, i.e. extended support vector regression, to estimate the stiffness of the compacted soil from drum acceleration records. Furthermore, the impacts of the amplitude and frequency of the vibration on the level of underlying soil compaction are discussed. The proposed machine learning approach is promising for real-time extraction of actual soil stiffness during compaction. Results of the study can be employed by practising engineers to interpret roller drum acceleration data and estimate the level of compaction and ground stiffness during compaction.
Keywords: Intelligent compaction; Machine learning method; Finite element modelling; Acceleration response
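The inverse-solver idea, mapping features of drum acceleration records to soil stiffness, can be sketched with standard support vector regression. This uses scikit-learn's SVR rather than the paper's extended variant, and the feature set and synthetic stiffness relation are invented for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Hypothetical surrogate for the FEM-generated dataset: four features
# extracted from drum acceleration records (e.g. harmonic amplitudes);
# target is soil stiffness in MPa.
X = rng.uniform(0.1, 1.0, size=(200, 4))
stiffness = 50.0 + 120.0 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(0.0, 2.0, 200)

# Scale features before the RBF kernel, then regress stiffness.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:150], stiffness[:150])
pred = model.predict(X[150:])
rmse = float(np.sqrt(np.mean((pred - stiffness[150:]) ** 2)))
print(f"Hold-out RMSE: {rmse:.1f} MPa")
```

In the paper the training data come from over a thousand 3D finite element simulations; here a closed-form relation with noise stands in for that dataset.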
19. Machine Learning Enhanced Boundary Element Method: Prediction of Gaussian Quadrature Points (Cited: 2)
Authors: Ruhui Cheng, Xiaomeng Yin, Leilei Chen — Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, No. 4, pp. 445-464 (20 pages)
This paper applies a machine learning technique to find a general and efficient numerical integration scheme for boundary element methods. A model based on a neural network multi-classification algorithm is constructed to find the minimum number of Gaussian quadrature points satisfying a given accuracy. The constructed model is trained using a large amount of data calculated with the traditional boundary element method, and the optimal network architecture is selected. The two-dimensional potential problem of a circular structure is tested and analyzed based on the determined model, and the accuracy of the model is about 90%. Finally, by incorporating the predicted Gaussian quadrature points into the boundary element analysis, we find that the numerical solution and the analytical solution are in good agreement, which verifies the robustness of the proposed method.
Keywords: Machine learning; Boundary element method; Gaussian quadrature points; Classification problems
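The labeling step implied by the abstract, finding the minimum number of Gauss-Legendre points that meets a given accuracy, can be sketched as follows. The integrands and tolerance are illustrative choices, not taken from the paper:

```python
import numpy as np

def min_gauss_points(f, exact, tol=1e-8, n_max=30):
    """Smallest n whose n-point Gauss-Legendre rule on [-1, 1] meets tol."""
    for n in range(1, n_max + 1):
        x, w = np.polynomial.legendre.leggauss(n)
        if abs(w @ f(x) - exact) < tol:
            return n
    return n_max

# Each integrand gets its minimum point count; a classifier trained on many
# such (integrand features, label) pairs is the model the abstract describes.
n_cos = min_gauss_points(np.cos, 2.0 * np.sin(1.0))
n_poly = min_gauss_points(lambda x: x**6, 2.0 / 7.0)
print(n_cos, n_poly)
```

An n-point Gauss rule integrates polynomials up to degree 2n-1 exactly, so x^6 needs n = 4; smooth non-polynomial integrands like cos converge within a few more points.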
20. Machine Learning to Instruct Single Crystal Growth by Flux Method (Cited: 1)
Authors: Tang-Shi Yao, Cen-Yao Tang, Meng Yang, Ke-Jia Zhu, Da-Yu Yan, Chang-Jiang Yi, Zi-Li Feng, He-Chang Lei, Cheng-He Li, Le Wang, Lei Wang, You-Guo Shi, Yu-Jie Sun, Hong Ding — Chinese Physics Letters (SCIE, CAS, CSCD), 2019, No. 6, pp. 98-102 (5 pages)
Growth of high-quality single crystals is of great significance for research in condensed matter physics. The exploration of suitable growth conditions for single crystals is expensive and time-consuming, especially for ternary compounds, because of the lack of ternary phase diagrams. Here we use machine learning (ML) trained on our experimental data to predict and instruct the growth. Four kinds of ML methods, including support vector machine (SVM), decision tree, random forest and gradient boosting decision tree, are adopted. The SVM method is relatively stable and works well, with an accuracy of 81% in predicting experimental results. By comparison, the accuracy achieved in the laboratory reaches 36%. The decision tree model is also used to reveal which features play critical roles in the growth processes.
Keywords: Machine learning; Instruct; Single crystal growth; Flux method
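The SVM classification of growth outcomes can be sketched as follows. The recipe features, the synthetic success rule, and the dataset are entirely invented stand-ins for the authors' experimental records:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Hypothetical flux-growth recipe features: peak temperature (C), cooling
# rate (C/h), flux-to-charge ratio, dwell time (h); label 1 = usable crystal.
X = np.column_stack([
    rng.uniform(800, 1200, 300),   # peak temperature
    rng.uniform(1, 10, 300),       # cooling rate
    rng.uniform(0.5, 5, 300),      # flux-to-charge ratio
    rng.uniform(5, 50, 300),       # dwell time
])
y = ((X[:, 0] > 950) & (X[:, 1] < 6)).astype(int)  # synthetic success rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
acc = float(clf.score(X_te, y_te))
print(f"Hold-out accuracy: {acc:.2f}")
```

A tree-based model fitted to the same data could additionally rank feature importances, which is how the paper identifies the conditions that matter most.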