Algorithms are the primary component of Artificial Intelligence (AI): an algorithm is the process by which AI imitates the human mind to solve problems. Currently, the performance of AI is assessed by evaluating AI algorithms through metric scores on data sets. However, this evaluation is challenging because a single type of algorithm may be evaluated against many data sets and many metrics. Different algorithms may have individual strengths and weaknesses in metric scores on separate data sets, undermining the credibility and validity of the evaluation. Moreover, evaluating algorithms requires repeated experiments on different data sets, diverting researchers' attention from the algorithms themselves. Crucially, comparing metric scores does not account for an algorithm's ability to solve problems. Nor is the classical evaluation of algorithms by time and space complexity suitable for AI algorithms: a classical algorithm accepts an unbounded range of inputs, whereas an AI algorithm's input is a data set, which is finite and varied. Given that current AI algorithm evaluation does not reflect problem-solving capability, this paper summarizes the features of AI algorithm evaluation and proposes an AI evaluation method that incorporates the problem-solving capabilities of algorithms.
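The credibility problem the abstract describes can be made concrete with a tiny hypothetical example (all algorithm names, data sets, and scores below are invented): the algorithm with the better mean score is not the winner on every data set, so a single aggregate number hides per-data-set strengths and weaknesses.

```python
# Invented accuracy scores for two hypothetical algorithms on three data sets.
scores = {
    "algo_A": {"ds1": 0.90, "ds2": 0.70, "ds3": 0.72},
    "algo_B": {"ds1": 0.80, "ds2": 0.78, "ds3": 0.78},
}

def mean_score(algo):
    vals = scores[algo].values()
    return sum(vals) / len(vals)

def winner_on(ds):
    return max(scores, key=lambda a: scores[a][ds])

best_by_mean = max(scores, key=mean_score)               # aggregate verdict
per_dataset_winners = {ds: winner_on(ds) for ds in scores["algo_A"]}
```

Here `algo_B` wins by mean score, yet `algo_A` wins outright on `ds1` — the kind of disagreement that motivates an evaluation method going beyond metric averaging.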
Variable Cycle Engine (VCE) serves as the core system for achieving future advanced fighters with cross-generational performance and mission versatility. However, the resulting complex configuration and strong coupling of control parameters present significant challenges in designing acceleration and deceleration control schedules. To thoroughly explore the performance potential of the engine, a global integrated design method for acceleration and deceleration control schedules based on inner- and outer-loop optimization is proposed. The outer-loop optimization module employs the Integrated Surrogate-Assisted Co-Differential Evolutionary (ISACDE) algorithm to optimize the variable geometry adjustment laws based on B-spline curves, and the inner-loop optimization module adopts the fixed-state method to design the open-loop fuel–air ratio control schedules; both aim to minimize the acceleration and deceleration time under multiple constraints. Simulation results demonstrate that the proposed global integrated design method not only shortens the acceleration and deceleration time the most, but also effectively safeguards the engine from exceeding its operating limits.
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. The solution determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and the Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. The model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm over the basic classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), the model can serve as a methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. Such a sample will allow the top management of electric distribution companies to increase the efficiency of inspection raids, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
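The ROC-AUC figure quoted above can be computed without plotting a curve, via its rank interpretation: AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal sketch (not the authors' code; the label/probability values are invented):

```python
def roc_auc(labels, probs):
    """labels: 0/1 ground truth; probs: predicted NTL probabilities."""
    pos = [p for y, p in zip(labels, probs) if y == 1]
    neg = [p for y, p in zip(labels, probs) if y == 0]
    wins = 0.0
    for p in pos:          # count positive-vs-negative score comparisons
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

auc = roc_auc([1, 0, 1, 0, 1, 0], [0.9, 0.2, 0.8, 0.4, 0.3, 0.1])
```

The O(n²) pairwise loop is fine for a sketch; production implementations use a single rank-sum pass instead.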
Measurement-while-drilling (MWD) and guidance technologies have been extensively deployed in the exploitation of oil, natural gas, and other energy resources. Conventional control approaches are plagued by challenges, including limited anti-interference capability and insufficient generalization of decision-making experience. To address the intricate problem of directional well trajectory control, an intelligent algorithm design framework grounded in the high-level interaction mechanism between geology and engineering is put forward. This framework aims to facilitate the rapid batch migration and updating of drilling strategies. The proposed directional well trajectory control method comprehensively considers the multi-source heterogeneous attributes of drilling experience data, leverages generative simulation of the geological drilling environment, and promptly constructs a trajectory control model that adapts to environmental variations. The construction follows three hierarchical levels: offline pre-drilling learning, online during-drilling interaction, and post-drilling model transfer. Simulation results indicate that the guidance model derived from this method demonstrates remarkable generalization performance and accuracy. It can significantly boost the adaptability of the control algorithm to diverse environments and enhance the penetration rate of the target reservoir during drilling operations.
The modeling of crack growth in three-dimensional (3D) space poses significant challenges in rock mechanics due to the complex numerical computation involved in simulating crack propagation and interaction in rock materials. In this study, we present a novel approach that introduces a 3D numerical manifold method (3D-NMM) with a geometric kernel to enhance computational efficiency. Specifically, the maximum tensile stress criterion is adopted as the crack growth criterion to achieve strongly discontinuous crack growth, and a local crack tracking algorithm and an angle correction technique are incorporated to address minor limitations of the algorithm in a 3D model. The program is implemented in Python, using object-oriented programming in two independent modules: a calculation module and a crack module. Furthermore, we propose feasible improvements to enhance the performance of the algorithm. Finally, we demonstrate the feasibility and effectiveness of the enhanced algorithm in the 3D-NMM using four numerical examples. This study establishes the potential of the 3D-NMM, combined with the local tracking algorithm, for accurately modeling 3D crack propagation in brittle rock materials.
To accomplish reliability analyses of the correlation of multiple analytical objectives, an innovative framework of Dimensional Synchronous Modeling (DSM) and correlation analysis is developed based on a stepwise modeling strategy, the cell array operation principle, and Copula theory. Under this framework, we propose a DSM-based Enhanced Kriging (DSMEK) algorithm to synchronously derive the models of multiple objectives, and explore an adaptive Copula function approach to analyze the correlation among those objectives and assess the overall reliability level. In the proposed DSMEK and adaptive Copula methods, the Kriging model is treated as the basis function of the DSMEK model, the Multi-Objective Snake Optimizer (MOSO) algorithm is used to search for the optimal hyperparameters of the basis functions, the cell array operation principle is adopted to establish a whole model of the multiple objectives, goodness of fit is utilized to determine the forms of the Copula functions, and the determined Copula functions are employed to perform the reliability analyses of the correlation of the analytical objectives. Furthermore, three examples, including multi-objective complex function approximation, aeroengine turbine bladed-disc multi-failure-mode reliability analyses, and aircraft landing gear system brake temperature reliability analyses, are used to verify the effectiveness of the proposed methods from the viewpoints of both mathematics and engineering. The results show that the DSMEK and adaptive Copula approaches hold obvious advantages in terms of modeling features and simulation performance. This work provides a useful way to model multiple analytical objectives and perform overall reliability analyses of complex structures/systems with multi-output responses.
This paper investigates a dam break in a channel with a bend in the presence of several obstacles. To accurately determine the flood zones, many factors must be taken into account, such as terrain and reservoir volume. Numerical modeling was used to determine the flood zone: the Navier-Stokes equations with the k-epsilon RNG turbulence model, the Volume of Fluid (VOF) method, and the PISO algorithm were used to analyze the flow in a bend channel at an angle of 10° with obstacles. To verify the numerical model, a dam-break test in a 45° channel was conducted, and the simulation results were compared with experimental data and with existing numerical results. Having confirmed the correctness of the mathematical model, the authors carried out numerical simulations of the main problem in three variants: without obstacles, with one obstacle, and with two obstacles. The numerical results show that irregular landforms held back the flow: a decrease in water level and a slower arrival of the water could be seen. The water flow without an obstacle, with one obstacle, and with two obstacles arrived at 4.2 s, 4.4 s, and 4.6 s, respectively. This time shift can give a certain advantage when organizing the evacuation of people.
Topography can strongly affect ground motion, yet studies quantifying the topographic effect of hill surfaces are relatively rare. In this paper, a new quantitative seismic topographic effect prediction method based upon the BP neural network algorithm and the three-dimensional finite element method (FEM) was developed. The FEM simulation results were compared with seismic records; the results show that the PGA and response spectra tend to increase with increasing elevation, but the correlation between PGA amplification factors and slope is not obvious for low hills. New BP neural network models were established for predicting the amplification factors of PGA and response spectra. Two combinations of easily obtained input variables are proposed for predicting the amplification factors of PGA and of response spectra, respectively. The absolute prediction errors are mostly within 0.1 for PGA amplification factors and mostly within 0.2 for response spectra amplification factors. One input variable combination achieves better prediction performance, while the other offers better expandability of the predictive region. Notably, the BP models employ only one hidden layer with about a hundred nodes, which makes them efficient to train.
Motivated by the study of regularization for sparse problems, we propose a new regularization method for sparse vector recovery. We derive sufficient conditions for the well-posedness of the new regularization and design an iterative algorithm, namely the iteratively reweighted algorithm (IR-algorithm), for efficiently computing sparse solutions to the proposed regularization model. The convergence of the IR-algorithm and the setting of the regularization parameters are analyzed at length. Finally, we present numerical examples to illustrate the features of the new regularization and algorithm.
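The "iteratively reweighted" idea can be illustrated on a deliberately simple stand-in problem (not the paper's model): sparsity-promoting denoising of a vector b by minimizing (x − b)² + λ|x| coordinate-wise. The nonsmooth |x| term is replaced at each iteration by the weighted quadratic w·x² with weight w = 1/(|x| + ε), which has a closed-form minimizer; repeating the solve-and-reweight cycle drives small entries to zero while keeping large ones.

```python
def irls_denoise(b, lam=1.0, eps=1e-8, iters=50):
    """Iteratively reweighted sketch for sparse denoising (illustrative only)."""
    x = list(b)
    for _ in range(iters):
        # closed-form minimizer of (x_i - b_i)^2 + lam * w_i * x_i^2
        # with the current weight w_i = 1 / (|x_i| + eps)
        x = [bi / (1.0 + lam / (abs(xi) + eps)) for bi, xi in zip(b, x)]
    return x

x = irls_denoise([5.0, 0.1, -4.0, 0.05])
```

Entries below the effective threshold (0.1 and 0.05 here) collapse to zero, while the large entries survive with a shrinkage of about λ.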
The grid-based multi-velocity field technique has become increasingly popular for simulating contact problems with the Material Point Method (MPM). However, this traditional technique has some shortcomings: (1) early contact and contact penetration can occur when the contact conditions are unsuitable, and (2) the method cannot handle contact problems involving rigid and non-rigid materials, which can cause numerical instability. This study presents a new hybrid contact approach for the MPM that addresses these limitations in simulating soil-structure interactions. The approach combines the advantages of point-point and point-segment contact to implement contact detection, satisfying the impenetrability condition and smoothing the corner contact problem. The proposed approach is first validated through a disk test on an inclined slope. Then, several typical cases, such as granular collapse, bearing capacity, and deformation of a flexible retaining wall, are simulated to demonstrate the robustness of the proposed approach compared with FEM or analytical solutions. Finally, the proposed method is used to simulate the impact of sand flow on a deformable structure. The results show that the proposed contact approach describes soil-structure interaction phenomena well.
Ocean bottom node (OBN) data acquisition is the main development direction of marine seismic exploration; it is widely promoted, especially in shallow sea environments. However, OBN receivers may move several times during long-term acquisition because they are easily affected by tides, currents, and other factors in the shallow sea environment. If uncorrected, the imaging quality of subsequent processing will suffer. Conventional secondary positioning does not consider multiple movements of the receivers, and its accuracy is insufficient. The first arrival of OBN seismic data in a shallow ocean mainly comprises refracted waves. In this study, a nonlinear model is established in accordance with the propagation mechanism of a refracted wave and its relationship with the time interval curve to accurately locate multiple receiver movements. In addition, the Levenberg-Marquardt algorithm is used to reduce the influence of first-arrival picking errors and to automatically detect receiver movements, yielding accurate dynamic relocation of the receivers. Simulation and field data show that the proposed method can dynamically locate multiple receiver movements, thereby improving the accuracy of seismic imaging, and has high practical value.
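The Levenberg-Marquardt step mentioned above can be sketched in its simplest one-parameter form (the exponential model and data here are invented stand-ins, not the paper's refracted-wave traveltime model): the damping factor λ blends between a Gauss-Newton step (small λ) and a short gradient-descent step (large λ), and is adapted according to whether the trial step lowers the cost.

```python
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]        # noiseless data, true b = 0.5

def cost(b):
    return sum((math.exp(b * x) - y) ** 2 for x, y in zip(xs, ys))

def lm_fit(b=0.0, lam=1e-3, iters=100):
    """Fit b in m(x) = exp(b*x) by damped Gauss-Newton (Levenberg-Marquardt)."""
    c = cost(b)
    for _ in range(iters):
        r = [y - math.exp(b * x) for x, y in zip(xs, ys)]   # residuals
        J = [x * math.exp(b * x) for x in xs]               # d m / d b
        # normal equation (J^T J + lam) * delta = J^T r, scalar case
        delta = sum(j * ri for j, ri in zip(J, r)) / (sum(j * j for j in J) + lam)
        trial = b + delta
        if cost(trial) < c:            # accept step, trust the model more
            b, c, lam = trial, cost(trial), lam * 0.5
        else:                          # reject step, increase damping
            lam *= 10.0
    return b

b_hat = lm_fit()
```

Starting from b = 0, the damped iteration rejects the early overshooting Gauss-Newton steps and then converges to the true parameter.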
In this paper, an absorbing Fictitious Boundary Condition (FBC) is presented to generate an iterative Domain Decomposition Method (DDM) for analyzing waveguide problems. A relaxed algorithm is introduced to improve the iterative convergence, and the matrix equations are solved using the multifrontal algorithm, which greatly reduces the CPU time. Finally, a number of numerical examples are given to illustrate the accuracy and efficiency of the method.
This study sets up two new merit functions, which are minimized to detect the real and complex eigenvalues of nonlinear eigenvalue problems. For each eigen-parameter, the vector variable is solved from a nonhomogeneous linear system obtained by reducing the number of eigen-equations by one: one nonzero component of the eigenvector is normalized to unity, and the column containing that component is moved to the right-hand side as a nonzero input vector. 1D and 2D golden section search algorithms are employed to minimize the merit functions and locate the real and complex eigenvalues; simultaneously, the real and complex eigenvectors can be computed very accurately. A simpler approach to nonlinear eigenvalue problems is also proposed, which implements the normalization condition for the uniqueness of the eigenvector directly in the eigen-equation. The real eigenvalues can then be computed by the fictitious time integration method (FTIM), which saves computational cost compared with the one-dimensional golden section search algorithm (1D GSSA). The simpler method is also combined with the Newton iteration method, which converges very quickly. All the proposed methods are easily programmed to compute eigenvalues and eigenvectors with high accuracy and efficiency.
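A 1D golden section search of the kind applied to the merit functions above can be written in a few lines. This sketch minimizes an invented quadratic test function, not one of the paper's merit functions; the interval is shrunk by the golden ratio each step so the bracket always contains the minimizer of a unimodal function.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden section search."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~= 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                       # minimizer lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                       # minimizer lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

A production version would cache the interior function values instead of re-evaluating them each pass; the logic is otherwise the standard one.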
The electromagnetic detection satellite (EDS) is a type of earth observation satellite (EOS). The information collected by EDSs plays an important role in fields such as industry, science, and the military. The scheduling of EDSs is a complex combinatorial optimization problem. Current research mainly focuses on the scheduling of imaging satellites and SAR satellites, but little work has been done on the scheduling of EDSs with their specific characteristics. A multi-satellite scheduling model is established in which the specific constraints of EDSs are considered, and a scheduling algorithm based on the genetic algorithm (GA) is proposed. To deal with the specific constraints of EDSs, a penalty function method is introduced. However, it is hard to determine an appropriate penalty coefficient in the penalty function; therefore, an adaptive adjustment mechanism for the penalty coefficient is designed to solve this problem and improve the scheduling results. Experimental results demonstrate the correctness and practicability of the proposed scheduling algorithm.
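The adaptive-penalty idea can be isolated from the GA and shown on a deterministic toy problem (everything here is invented for illustration: the problem min x² subject to x ≥ 1, and a grid search standing in for the GA's population). Constraint violation is added to the objective with coefficient c; whenever the penalized optimum still violates the constraint beyond a tolerance, c is increased and the search repeated, which is the essence of adapting the penalty coefficient instead of guessing it.

```python
def violation(x):
    return max(0.0, 1.0 - x)              # amount by which x >= 1 is broken

def penalized(x, c):
    return x * x + c * violation(x) ** 2  # objective + quadratic penalty

grid = [i / 100.0 for i in range(-500, 501)]   # stand-in for the GA population
c = 1.0
best_x = min(grid, key=lambda x: penalized(x, c))
while violation(best_x) > 0.01:           # adapt: still infeasible -> stiffen
    c *= 2.0
    best_x = min(grid, key=lambda x: penalized(x, c))
```

With a weak penalty the unconstrained pull toward x = 0 wins; as c grows, the penalized optimum migrates to the constrained optimum x = 1.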
This research introduces a novel approach to enhancing bucket elevator design and operation through the integration of discrete element method (DEM) simulation, design of experiments (DOE), and metaheuristic optimization algorithms. Specifically, the study employs the firefly algorithm (FA), a metaheuristic optimization technique, to optimize bucket elevator parameters for maximizing the transport mass and mass flow rate discharge of granular materials under specified working conditions. The experimental methodology involves several key steps: screening experiments to identify significant factors affecting bucket elevator operation, central composite design (CCD) experiments to explore these factors further, and response surface methodology (RSM) to create predictive models for transport mass and mass flow rate discharge. The FA is then applied to optimize these models, and the results are validated through simulation and empirical experiments, with comparison against the DEM simulation. The outcomes demonstrate the effectiveness of the FA in identifying optimal bucket parameters, with less than 10% and 15% deviation between predicted and actual values for transport mass and mass flow rate discharge, respectively. Overall, this research provides insight into the critical factors influencing bucket elevator operation and offers a systematic methodology for optimizing bucket parameters, contributing to more efficient material handling in various industrial applications.
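The firefly algorithm itself is compact enough to sketch. The objective here is a generic stand-in (the sphere function), not the study's RSM response surfaces, and the parameter values are illustrative: each firefly moves toward every brighter one with attractiveness β₀·exp(−γr²), plus a random-walk term α that is damped over the iterations.

```python
import math, random

def sphere(p):
    return sum(v * v for v in p)

def firefly_min(f, dim=2, n=20, iters=60, beta0=1.0, gamma=1.0, alpha=0.3, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    gbest = min(pop, key=f)[:]                       # best solution seen so far
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):            # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * rng.uniform(-0.5, 0.5)
                              for a, b in zip(pop[i], pop[j])]
        cand = min(pop, key=f)
        if f(cand) < f(gbest):
            gbest = cand[:]
        alpha *= 0.95                                # damp the random walk
    return gbest

best = firefly_min(sphere)
```

Tracking `gbest` separately guarantees the returned solution never gets worse than the best firefly ever observed, even though individual moves are stochastic.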
In this work, approximate analytical solutions to the lid-driven square cavity flow problem, which satisfies the two-dimensional unsteady incompressible Navier-Stokes equations, are presented using the kinetically reduced local Navier-Stokes equations. The reduced differential transform method and a perturbation-iteration algorithm are applied to solve this problem, and the convergence of both methods is analyzed. Numerical results of both methods are given for several Reynolds numbers and low Mach numbers and compared with results of earlier studies in the literature. The two methods are easy and fast to implement, and their results are close to each other and to other numerical results, so these methods are useful for finding approximate analytical solutions to unsteady incompressible flow problems at low Mach numbers.
The exploration of urban underground space is of great significance to urban planning, geological disaster prevention, resource exploration, and environmental monitoring. However, owing to severe interference, conventional seismic methods cannot adapt well to the complex urban environment. By adopting single-node data acquisition and taking seismic ambient noise as the signal, the microtremor horizontal-to-vertical spectral ratio (HVSR) method effectively avoids the strong interference caused by the complex urban environment; information such as the S-wave velocity and thickness of underground formations can be obtained by fitting the microtremor HVSR curve. Nevertheless, HVSR curve inversion is a multi-parameter curve fitting process, and conventional inversion methods easily converge to a local minimum, which directly affects the reliability of the inversion results. The authors therefore propose an HVSR inversion method based on the multimodal forest optimization algorithm, which uses an efficient clustering technique and locates the global optimum quickly. Tests on synthetic data show that the inversion results of the proposed method are consistent with the forward model, and both its adaptation and stability with respect to an abnormal-layer velocity model are demonstrated. The results on real field data are also verified by drilling information.
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sums to a constant such as 100%. The statistical linear model is the most widely used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. The linear regression model is a commonly used statistical modeling technique for finding relationships between variables of interest in various applications. When estimating linear regression parameters, which are useful for future prediction and for the partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, in the sense of maximum likelihood or maximum a posteriori (MAP) estimation. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performed on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) imputation and mean imputation, in terms of Aitchison distances and covariance.
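The E-step/M-step alternation described above can be shown on a deliberately small stand-in problem (invented toy data, not the study's compositional dataset): simple linear regression where some responses are missing. The E step imputes each missing y with its prediction under the current parameters; the M step refits ordinary least squares on the completed data; the cycle repeats until the parameters stabilize.

```python
def ols(pairs):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1.0, 3.1, 4.9, 7.1, None, None, 13.0, None]   # roughly y = 2x + 1, 3 missing

slope, intercept = 0.0, 0.0
for _ in range(30):                                  # EM iterations
    completed = [(x, y if y is not None else slope * x + intercept)
                 for x, y in zip(xs, ys)]            # E step: impute missing y
    slope, intercept = ols(completed)                # M step: refit on completed data
```

Because the imputed points lie exactly on the current fitted line, the iteration's fixed point is the fit to the observed data alone, which the loop reaches to high precision.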
When soldering electronic components onto circuit boards, the temperature curves of the reflow oven across its zones and the conveyor belt speed significantly influence product quality. This study focuses on optimizing the furnace temperature curve under varying settings of the reflow oven zone temperatures and conveyor belt speed. To address this, the research sequentially develops a heat transfer model for reflow soldering, an optimization model for reflow furnace conditions using the differential evolution algorithm, and an evaluation and decision model combining the differential evolution algorithm with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. This approach determines the optimal furnace temperature curve, the zone temperatures of the reflow oven, and the conveyor belt speed.
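The TOPSIS step can be sketched independently of the soldering models (the alternatives, criteria, and weights below are invented for illustration, not the study's data): normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by their relative closeness to the ideal.

```python
import math

def topsis(matrix, weights, benefit):
    """benefit[j] is True if criterion j is better when larger."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]       # best value per criterion
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]        # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))       # relative closeness in [0, 1]
    return scores

# three candidate settings x two criteria (quality: benefit, defect rate: cost)
scores = topsis([[0.9, 0.05], [0.7, 0.02], [0.5, 0.10]],
                weights=[0.6, 0.4], benefit=[True, False])
best = scores.index(max(scores))
```

The middle alternative wins here: its moderate quality is outweighed by its much lower defect rate, which is exactly the kind of trade-off TOPSIS is meant to arbitrate.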
In this paper, a parallel algorithm in iterative form for solving finite element equations is presented. Based on the iterative solution of linear algebraic equations, parallel computational steps are introduced into the method. By using the weighted residual method and choosing appropriate weighting functions, the basic finite element form of the parallel algorithm is derived. The algorithm has been implemented on the ELXSI-6400 parallel computer of Xi'an Jiaotong University. The computational results show that the operational speed is raised and the CPU time is cut down effectively, so this method is an effective parallel algorithm for solving the finite element equations of large-scale structures.
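The kind of iterative kernel such methods parallelize can be illustrated with a plain Jacobi sweep on a small diagonally dominant tridiagonal system (the matrix below is a stand-in, not a real finite element assembly): every component update reads only the previous iterate, so all components of a sweep can be computed in parallel.

```python
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]

def jacobi(A, b, iters=100):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]        # all i independent -> parallelizable
    return x

x = jacobi(A, b)
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
```

For this system the exact solution is x = (1, 1, 1), and strict diagonal dominance guarantees the iteration converges to it.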
Funding: This work was funded by the General Program of the National Natural Science Foundation of China, grant number 62277022.
Funding: This work was supported by the Basic Research on Dynamic Real-time Modeling and Onboard Adaptive Modeling of Aero Engine, China (No. QZPY202308).
Funding: Supported by the National Key R&D Program of China (No. 2019YFA0708304), the CNPC Innovation Fund (No. 2022DQ02-0609), and the Scientific Research and Technology Development Project of CNPC (No. 2022DJ4507).
Abstract: Measurement-while-drilling (MWD) and guidance technologies have been extensively deployed in the exploitation of oil, natural gas, and other energy resources. Conventional control approaches are plagued by challenges including limited anti-interference capability and insufficient generalization of decision-making experience. To address the intricate problem of directional well trajectory control, an intelligent algorithm design framework grounded in the high-level interaction between geology and engineering is put forward. This framework aims to facilitate the rapid batch migration and updating of drilling strategies. The proposed directional well trajectory control method comprehensively considers the multi-source heterogeneous attributes of drilling experience data, leverages generative simulation of the geological drilling environment, and promptly constructs a trajectory control model that self-adapts to environmental variations. The construction proceeds on three hierarchical levels: offline pre-drilling learning, online during-drilling interaction, and post-drilling model transfer. Simulation results indicate that the guidance model derived from this method demonstrates remarkable generalization performance and accuracy. It can significantly boost the adaptability of the control algorithm to diverse environments and enhance the penetration rate of the target reservoir during drilling operations.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42172312 and 52211540395) and the Institut Universitaire de France (IUF).
Abstract: Modeling crack growth in three-dimensional (3D) space poses significant challenges in rock mechanics because of the complex numerical computation involved in simulating crack propagation and interaction in rock materials. In this study, we present a novel approach that introduces a 3D numerical manifold method (3D-NMM) with a geometric kernel to enhance computational efficiency. Specifically, the maximum tensile stress criterion is adopted as the crack growth criterion to achieve strongly discontinuous crack growth, and a local crack tracking algorithm and an angle correction technique are incorporated to address minor limitations of the algorithm in a 3D model. The program is implemented in Python using object-oriented programming in two independent modules: a calculation module and a crack module. Furthermore, we propose feasible improvements to enhance the performance of the algorithm. Finally, we demonstrate the feasibility and effectiveness of the enhanced algorithm in the 3D-NMM using four numerical examples. This study establishes the potential of the 3D-NMM, combined with the local tracking algorithm, for accurately modeling 3D crack propagation in brittle rock materials.
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 52405293 and 52375237), the China Postdoctoral Science Foundation (No. 2024M754219), and the Shaanxi Province Postdoctoral Research Project Funding, China.
Abstract: To accomplish reliability analyses of correlated multi-analytical objectives, an innovative framework of Dimensional Synchronous Modeling (DSM) and correlation analysis is developed based on a stepwise modeling strategy, the cell array operation principle, and Copula theory. Under this framework, we propose a DSM-based Enhanced Kriging (DSMEK) algorithm to synchronously model multiple objectives, and explore an adaptive Copula function approach to analyze the correlation among the objectives and assess the overall reliability level. In the proposed DSMEK and adaptive Copula methods, the Kriging model is treated as the basis function of the DSMEK model, the Multi-Objective Snake Optimizer (MOSO) algorithm searches for the optimal hyperparameters of the basis functions, the cell array operation principle is adopted to establish a whole model of the multiple objectives, goodness of fit determines the forms of the Copula functions, and the determined Copula functions are employed to perform the reliability analyses of the correlated objectives. Furthermore, three examples, including multi-objective complex function approximation, aeroengine turbine bladed-disc multi-failure-mode reliability analyses, and aircraft landing gear brake temperature reliability analyses, are used to verify the effectiveness of the proposed methods from both mathematical and engineering viewpoints. The results show that the DSMEK and adaptive Copula approaches hold obvious advantages in modeling features and simulation performance. This work provides a useful way to model multi-analytical objectives and perform overall reliability analyses of complex structures/systems with multi-output responses.
Funding: Supported by a grant from the Ministry of Science and Higher Education of the Republic of Kazakhstan (AP23489948).
Abstract: This paper investigates a dam break in a channel with a bend in the presence of several obstacles. To accurately determine the flood zones, many factors such as terrain and reservoir volume must be taken into account. Numerical modeling based on the Navier-Stokes equations with the k-epsilon RNG turbulence model, the Volume of Fluid (VOF) method, and the PISO algorithm was used to analyze the flow in a channel with a 10° bend containing the obstacles. To verify the numerical model, a dam-break test in a 45° channel was conducted, and the simulation results were compared with experimental data and with existing numerical results. Having confirmed the correctness of the mathematical model, the authors carried out numerical simulations of the main problem in three variants: without obstacles, with one obstacle, and with two obstacles. The numerical results show that irregular landforms held back the flow: a decrease in water level and a delayed arrival of the water could be seen. The flow without an obstacle, with one obstacle, and with two obstacles arrived at 4.2 s, 4.4 s, and 4.6 s, respectively. This time shift can give a certain advantage when organizing the evacuation of people.
Funding: Supported by the National Natural Science Foundation of China (No. 51878625), the Collaboratory for the Study of Earthquake Predictability in the China Seismic Experimental Site (No. 2018YFE0109700), and the General Scientific Research Foundation of the Shandong Earthquake Agency (No. YB2208).
Abstract: Topography can strongly affect ground motion, yet studies quantifying the topographic effect of hill surfaces are relatively rare. In this paper, a new quantitative prediction method for the seismic topographic effect, based on the BP neural network algorithm and the three-dimensional finite element method (FEM), was developed. The FEM simulation results were compared with seismic records; the PGA and response spectra tend to increase with elevation, but the correlation between PGA amplification factors and slope is not obvious for low hills. New BP neural network models were established to predict the amplification factors of the PGA and response spectra. Two combinations of easily obtainable input variables are proposed for predicting the amplification factors of the PGA and response spectra, respectively. The absolute prediction errors are mostly within 0.1 for PGA amplification factors and mostly within 0.2 for response spectra amplification factors. One input-variable combination achieves better prediction performance, while the other generalizes better to new regions. Notably, the BP models employ only one hidden layer with about a hundred nodes, which makes them efficient to train.
Funding: Project supported by the National Natural Science Foundation of China (No. 61603322) and the Research Foundation of the Education Bureau of Hunan Province of China (No. 16C1542).
Abstract: Motivated by the study of regularization for sparse problems, we propose a new regularization method for sparse vector recovery. We derive sufficient conditions for the well-posedness of the new regularization and design an iterative algorithm, namely the iteratively reweighted algorithm (IR-algorithm), for efficiently computing sparse solutions to the proposed regularization model. The convergence of the IR-algorithm and the setting of the regularization parameters are analyzed at length. Finally, we present numerical examples to illustrate the features of the new regularization and algorithm.
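The paper's specific regularization and IR-algorithm are not reproduced here, but the general iteratively-reweighted idea can be illustrated with a classic IRLS scheme for basis-pursuit-style sparse recovery: each iteration solves a weighted least-squares problem whose weights come from the previous iterate, with the smoothing parameter annealed toward zero. Everything below (weights, annealing schedule, problem sizes) is a textbook-style assumption, not the paper's method.

```python
import numpy as np

def irls_sparse(A, b, n_iter=40, eps=1.0):
    """IRLS sketch approximating min ||x||_1 subject to A x = b.
    Weighted LS solution: x = W A^T (A W A^T)^{-1} b with W = diag(|x_i| + eps)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-norm starting point
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
        eps = max(eps * 0.5, 1e-9)                # anneal the smoothing term
    return x
```

Each iterate satisfies the measurements exactly; as eps shrinks, mass concentrates on a sparse support.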
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52022060) and the Key Laboratory of Impact and Safety Engineering (Ningbo University).
Abstract: The grid-based multi-velocity field technique has become increasingly popular for simulating contact problems with the Material Point Method (MPM). However, this traditional technique has some shortcomings: (1) early contact and contact penetration can occur when the contact conditions are unsuitable, and (2) the method cannot handle contact problems involving rigid and non-rigid materials, which can cause numerical instability. This study presents a new hybrid contact approach for the MPM that addresses these limitations in order to simulate soil-structure interactions. The approach combines the advantages of point-point and point-segment contact to implement contact detection, satisfying the impenetrability condition and smoothing the corner contact problem. The proposed approach is first validated through a disk test on an inclined slope. Then, several typical cases, such as granular collapse, bearing capacity, and deformation of a flexible retaining wall, are simulated to demonstrate the robustness of the proposed approach compared with FEM or analytical solutions. Finally, the proposed method is used to simulate the impact of sand flow on a deformable structure. The results show that the proposed contact approach describes soil-structure interaction phenomena well.
Funding: Funded by the National Natural Science Foundation of China (No. 42074140) and the Scientific Research and Technology Development Project of China National Petroleum Corporation (No. 2021ZG02).
Abstract: Ocean bottom node (OBN) data acquisition is the main development direction of marine seismic exploration and is widely promoted, especially in shallow-sea environments. However, OBN receivers may move several times during long-term acquisition because they are easily affected by tides, currents, and other factors in the shallow sea. If uncorrected, these movements degrade the imaging quality of subsequent processing. Conventional secondary positioning does not consider multiple movements of the receivers, and its accuracy is insufficient. The first arrivals of shallow-sea OBN seismic data mainly comprise refracted waves. In this study, a nonlinear model is established in accordance with the propagation mechanism of the refracted wave and its relationship with the travel-time curve, so as to accurately locate receivers that have moved multiple times. In addition, the Levenberg-Marquardt algorithm is used to reduce the influence of first-arrival picking errors and to automatically detect receiver movements, yielding accurate dynamic relocation of the receivers. Simulation and field data show that the proposed method can dynamically locate multiple receiver movements, thereby improving the accuracy of seismic imaging, and has high practical value.
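The core numerical tool named above, Levenberg-Marquardt, fits a nonlinear travel-time model to observed first arrivals. The sketch below is a deliberately simplified stand-in: it locates a single 2-D receiver from direct-wave travel times t_i = |s_i - r| / v rather than the paper's refracted-wave model, and the damping update rule is a common textbook choice, not the paper's.

```python
import numpy as np

def lm_locate(sources, t_obs, v, x0, n_iter=100, lam=1e-2):
    """Toy Levenberg-Marquardt: recover receiver position (x, y) from
    first-arrival times t_i = |s_i - r| / v (direct-wave simplification)."""
    x = np.array(x0, float)
    for _ in range(n_iter):
        d = np.linalg.norm(sources - x, axis=1)
        r = d / v - t_obs                      # travel-time residuals
        J = (x - sources) / (v * d[:, None])   # Jacobian d r_i / d x (6 x 2)
        A = J.T @ J + lam * np.eye(2)          # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        new = x - step
        d2 = np.linalg.norm(sources - new, axis=1)
        if np.sum((d2 / v - t_obs) ** 2) < np.sum(r ** 2):
            x, lam = new, lam * 0.5            # accept step, trust model more
        else:
            lam *= 10                          # reject step, damp harder
    return x
```

Detecting *multiple* movements, as in the paper, would amount to segmenting the shot sequence and running such a fit per segment.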
Abstract: In this paper, an absorbing Fictitious Boundary Condition (FBC) is presented to generate an iterative Domain Decomposition Method (DDM) for analyzing waveguide problems. A relaxed algorithm is introduced to improve the iterative convergence, and the matrix equations are solved using the multifrontal algorithm, which greatly reduces the CPU time. Finally, a number of numerical examples are given to illustrate the accuracy and efficiency of the method.
Funding: Supported by the National Science and Technology Council, Taiwan (Grant No. NSTC 111-2221-E-019-048).
Abstract: This study sets up two new merit functions, which are minimized to detect the real and complex eigenvalues of nonlinear eigenvalue problems. For each eigen-parameter, the vector variable is solved from a nonhomogeneous linear system obtained by reducing the dimension of the eigen-equation by one: one nonzero component of the eigenvector is normalized to unity, and the column containing that component is moved to the right-hand side as a nonzero input vector. 1D and 2D golden section search algorithms are employed to minimize the merit functions and thereby locate the real and complex eigenvalues; simultaneously, the real and complex eigenvectors can be computed very accurately. A simpler approach to the nonlinear eigenvalue problem is also proposed, which enforces a normalization condition for the uniqueness of the eigenvector directly in the eigen-equation. The real eigenvalues can then be computed by the fictitious time integration method (FTIM), which saves computational cost compared with the one-dimensional golden section search algorithm (1D GSSA). The simpler method is also combined with the Newton iteration method, which converges very quickly. All the proposed methods are easily programmed to compute eigenvalues and eigenvectors with high accuracy and efficiency.
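The 1D golden section search used above is a bracketing method for a unimodal function. A minimal generic implementation (not the paper's code; the merit function in the test is a simple determinant-squared stand-in for their eigenvalue merit functions):

```python
import math

def golden_section(f, a, b, tol=1e-9):
    """1D golden section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2        # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                        # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                              # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

For a linear 2x2 example with eigenvalues 2 and 5, minimizing the merit function det(A - lambda*I)^2 over a bracketing interval recovers each eigenvalue, mirroring how the paper's merit functions are driven to zero.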
Funding: Supported by the National Natural Science Foundation of China (Nos. 61101184 and 61174159).
Abstract: The electromagnetic detection satellite (EDS) is a type of earth observation satellite (EOS). The information collected by EDSs plays an important role in fields such as industry, science, and the military. The scheduling of EDSs is a complex combinatorial optimization problem. Current research mainly focuses on the scheduling of imaging satellites and SAR satellites, but little work has been done on the scheduling of EDSs, which have their own specific characteristics. A multi-satellite scheduling model is established in which the specific constraints of EDSs are considered, and a scheduling algorithm based on the genetic algorithm (GA) is proposed. To deal with the specific constraints of EDSs, a penalty function method is introduced. However, it is hard to determine an appropriate penalty coefficient for the penalty function. Therefore, an adaptive adjustment mechanism for the penalty coefficient is designed to solve this problem and improve the scheduling results. Experimental results demonstrate the correctness and practicability of the proposed scheduling algorithm.
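The adaptive-penalty idea can be shown on a toy continuous problem rather than the satellite model: a small GA minimizes an objective under one constraint, and the penalty coefficient is raised when too few individuals are feasible and relaxed otherwise. The test problem (minimize x1^2 + x2^2 subject to x1 + x2 >= 1, optimum at (0.5, 0.5)), the feasibility target, and the update factors are all illustrative assumptions, not the paper's scheduling formulation.

```python
import random

def ga_adaptive_penalty(seed=0, pop_size=60, gens=250):
    """Toy GA with an adaptively adjusted penalty coefficient `mu`."""
    rng = random.Random(seed)
    f = lambda x: x[0] ** 2 + x[1] ** 2                  # objective
    viol = lambda x: max(0.0, 1.0 - (x[0] + x[1]))       # constraint violation
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(pop_size)]
    mu = 1.0
    for _ in range(gens):
        feas = sum(viol(x) == 0.0 for x in pop) / pop_size
        mu = mu * 1.5 if feas < 0.4 else mu / 1.1        # adaptive penalty update
        fit = [f(x) + mu * viol(x) ** 2 for x in pop]
        new = []
        for _ in range(pop_size):
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]   # binary tournament
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            a = rng.random()
            child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]  # blend crossover
            if rng.random() < 0.2:
                child[rng.randrange(2)] += rng.gauss(0, 0.1)       # mutation
            new.append(child)
        pop = new
    # report the best near-feasible individual of the final generation
    return min(pop, key=lambda x: f(x) + 1e6 * viol(x))
```

The same raise-when-infeasible/relax-when-feasible rule carries over directly to discrete scheduling chromosomes.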
Funding: Funded by the Faculty of Engineering, King Mongkut's University of Technology North Bangkok (Contract No. ENG-NEW-66-39).
Abstract: This research introduces a novel approach to enhancing bucket elevator design and operation through the integration of discrete element method (DEM) simulation, design of experiments (DOE), and metaheuristic optimization algorithms. Specifically, the study employs the firefly algorithm (FA), a metaheuristic optimization technique, to optimize bucket elevator parameters for maximizing the transport mass and mass flow rate discharge of granular materials under specified working conditions. The experimental methodology involves several key steps: screening experiments to identify significant factors affecting bucket elevator operation, central composite design (CCD) experiments to further explore these factors, and response surface methodology (RSM) to create predictive models for transport mass and mass flow rate discharge. The FA is then applied to optimize these models, and the results are validated against DEM simulation and empirical experiments. The outcomes demonstrate the effectiveness of the FA in identifying optimal bucket parameters, with less than 10% and 15% deviation between predicted and actual values for transport mass and mass flow rate discharge, respectively. Overall, this research provides insights into the critical factors influencing bucket elevator operation and offers a systematic methodology for optimizing bucket parameters, contributing to more efficient material handling in various industrial applications.
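In the firefly algorithm named above, each candidate solution ("firefly") moves toward brighter (better) ones with an attraction that decays with distance, plus a cooled random step. The minimal sketch below minimizes a generic test function; the population size, attraction constants, and cooling schedule are conventional defaults, not the paper's settings (where the objective would be the RSM response surfaces).

```python
import math, random

def firefly(f, dim, n=25, gens=100, beta0=1.0, gamma=1.0, alpha=0.3, seed=1):
    """Minimal firefly algorithm for minimization: lower f(x) = brighter firefly."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for g in range(gens):
        I = [f(x) for x in X]                     # brightness (to be minimized)
        a = alpha * (0.97 ** g)                   # cool the random step over time
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                   # j is brighter: move i toward j
                    r2 = sum((xi - xj) ** 2 for xi, xj in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # distance-damped pull
                    X[i] = [xi + beta * (xj - xi) + a * (rng.random() - 0.5)
                            for xi, xj in zip(X[i], X[j])]
                    I[i] = f(X[i])
    return min(X, key=f)
```

Note the best firefly never moves, so the incumbent solution is preserved across generations.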
Abstract: In this work, approximate analytical solutions to the lid-driven square cavity flow problem, governed by the two-dimensional unsteady incompressible Navier-Stokes equations, are presented using the kinetically reduced local Navier-Stokes equations. The reduced differential transform method and the perturbation-iteration algorithm are applied to solve this problem, and the convergence of both methods is analyzed. The numerical results of both methods are given at several Reynolds numbers and low Mach numbers, and compared with results of earlier studies reported in the literature. The two methods are easy and fast to implement, and their results are close to each other and to other numerical results, so these methods are useful for finding approximate analytical solutions to unsteady incompressible flow problems at low Mach numbers.
Funding: Supported by the National Natural Science Foundation of China (No. 42074150), the National Key Research and Development Program of China (No. 2023YFC3707901), and the Futian District Integrated Ground Collapse Monitoring and Early Warning System Construction Project (No. FTCG2023000209).
Abstract: The exploration of urban underground space is of great significance to urban planning, geological disaster prevention, resource exploration, and environmental monitoring. However, because of severe interference, conventional seismic methods cannot adapt well to the complex urban environment. By adopting single-node data acquisition and taking ambient seismic noise as the signal, the microtremor horizontal-to-vertical spectral ratio (HVSR) method can effectively avoid the strong interference caused by the complex urban environment, and can obtain information such as the S-wave velocity and thickness of underground formations by fitting the microtremor HVSR curve. Nevertheless, HVSR curve inversion is a multi-parameter curve fitting process, and conventional inversion methods easily converge to a local minimum, which directly affects the reliability of the inversion results. The authors therefore propose an HVSR inversion method based on the multimodal forest optimization algorithm, which uses an efficient clustering technique and locates the global optimum quickly. Tests on synthetic data show that the inversion results of the proposed method are consistent with the forward model, and both its adaptability and stability with respect to an abnormal-velocity layer model are demonstrated. The results on real field data are also verified by drilling information.
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data that sums to a constant, such as 100%. The linear regression model is the most widely used statistical technique for identifying hidden relationships between underlying random variables of interest, and maximum likelihood estimation (MLE) is the method of choice for estimating its parameters, which are useful for tasks such as prediction and analyzing the partial effects of the independent variables. However, data quality is a significant challenge in machine learning, especially when observations are missing, and recovering them can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current parameter estimate, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize it. This study examined how well the EM algorithm performs on a synthetic compositional dataset with missing observations, using both robust least squares and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
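The E-step/M-step loop described above can be made concrete on the simplest relevant case: a bivariate-normal model of (x, y) where some x values are missing, from which the regression slope follows. This is a generic EM-style sketch under a normality assumption, not the paper's compositional-data procedure; the variance correction for imputed entries is the standard device for avoiding the bias of plain mean imputation.

```python
import numpy as np

def em_regression(x, y, n_iter=100):
    """EM sketch for bivariate-normal (x, y) with missing x encoded as np.nan.
    E-step: impute missing x with E[x | y]; M-step: re-estimate mean/covariance,
    adding the conditional variance of x|y for the imputed rows."""
    miss = np.isnan(x)
    xf = np.where(miss, np.nanmean(x), x)          # start from mean imputation
    for _ in range(n_iter):
        mu = np.array([xf.mean(), y.mean()])       # M-step: means
        Z = np.stack([xf, y], axis=1) - mu
        S = Z.T @ Z / len(y)                       # M-step: covariance
        # add conditional variance of x given y for the imputed fraction
        S[0, 0] += (S[0, 0] - S[0, 1] ** 2 / S[1, 1]) * miss.mean()
        # E-step: conditional expectation E[x | y] for the missing entries
        xf = np.where(miss, mu[0] + S[0, 1] / S[1, 1] * (y - mu[1]), x)
    return mu, S, S[0, 1] / S[0, 0]                # slope of y on x
```

Without the variance correction, the fitted covariance of x would be biased downward, which is exactly the weakness of plain mean imputation that the abstract's comparison targets.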
Abstract: When soldering electronic components onto circuit boards, the temperature curves of the reflow oven across its zones and the conveyor belt speed significantly influence product quality. This study focuses on optimizing the furnace temperature curve under varying settings of the reflow oven zone temperatures and conveyor belt speed. The research sequentially develops a heat transfer model for reflow soldering, an optimization model for reflow furnace conditions using the differential evolution algorithm, and an evaluation and decision model combining the differential evolution algorithm with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method. This approach determines the optimal furnace temperature curve, the zone temperatures of the reflow oven, and the conveyor belt speed.
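The TOPSIS step named above ranks candidate operating conditions by their closeness to an ideal solution. A minimal generic implementation follows; the decision matrix, weights, and benefit/cost labels in the test are invented toy values, not the study's furnace data.

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS sketch: score alternatives (rows) over criteria (columns).
    benefit[j] is True for larger-is-better criteria, False for costs."""
    n = len(matrix[0])
    norm = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * row[j] / norm[j] for j in range(n)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*V))]     # best value per criterion
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*V))]     # worst value per criterion
    scores = []
    for v in V:
        dp = math.dist(v, ideal)                   # distance to ideal
        dm = math.dist(v, worst)                   # distance to anti-ideal
        scores.append(dm / (dp + dm))              # closeness coefficient in [0, 1]
    return scores
```

In the study's pipeline, the differential evolution search would supply the candidate rows, and the highest closeness coefficient would pick the operating condition.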
Funding: This work was carried out as part of a research project supported by the National Structural Strength & Vibration Laboratory of Xi'an Jiaotong University with national funding.
Abstract: In this paper, a parallel algorithm in iterative form for solving finite element equations is presented. Based on the iterative solution of linear algebraic equations, parallel computational steps are introduced into the method. By using the weighted residual method and choosing appropriate weighting functions, the basic finite element form of the parallel algorithm is deduced. The algorithm has been implemented on the ELXSI-6400 parallel computer of Xi'an Jiaotong University. The computational results show that the operational speed is raised and the CPU time is cut down effectively, so this method is an effective parallel algorithm for solving the finite element equations of large-scale structures.
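The parallelism exploited in iterative solvers of this kind comes from component updates that are mutually independent within a sweep. The classic example (a stand-in for the paper's scheme, which is not reproduced here) is Jacobi iteration on a diagonally dominant stiffness-type system: every unknown's update in one sweep reads only the previous iterate, so all n updates could be dispatched to separate processors.

```python
def jacobi(A, b, n_iter=200):
    """Jacobi iteration for A x = b. Each component update below depends only
    on the previous iterate x, so one sweep's n updates are embarrassingly
    parallel -- the property a parallel FEM solver exploits."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Convergence requires, e.g., strict diagonal dominance of A, which assembled stiffness matrices with adequate supports typically provide.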