Without knowing the emittance value, it is difficult to optimize ion beam optics for minimum beam loss during transmission, especially considering the very high emittance values of electron cyclotron resonance (ECR) ion sources. With this in mind, to measure the emittance of the ion beams produced by the mVINIS ECR, which is part of the FAMA facility at the Vinča Institute of Nuclear Sciences, we have developed a pepper-pot scintillator screen system combined with a CMOS camera. The application, developed on the LabVIEW platform, allows us to control the camera's main attribute settings, such as the shutter speed and the gain, record the images in the region of interest, and process and filter the images in real time. To analyze the data from the obtained image, we have developed an algorithm called measurement and analysis of ion beam luminosity (MAIBL) to reconstruct the four-dimensional (4D) beam profile and calculate the root mean square (RMS) emittance. Before measuring emittance, we performed a simulated experiment using the pepper-pot simulation (PPS) program. The file exported by PPS contains a numerically generated raw image (mock image) of a beam with a predefined emittance value after it has passed through a pepper-pot mask. By feeding data from mock images into the MAIBL algorithm instead of images obtained by the camera, we can compare the calculated emittance with the initial emittance value set in PPS. In this paper, we present our computational tools and explain the method for verifying the correctness of the calculated emittance values.
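For reference, the RMS emittance that an analysis code such as MAIBL ultimately computes reduces to second moments of the reconstructed phase-space samples. The sketch below is our own minimal illustration (not the published MAIBL code); it assumes arrays of particle positions and divergences already recovered from the pepper-pot image.

```python
import numpy as np

def rms_emittance(x, xp):
    """Projected RMS emittance from positions x and divergences x':
    eps = sqrt(<x^2><x'^2> - <x x'>^2), using centered moments."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(x.var() * xp.var() - (x * xp).mean() ** 2)

def rms_emittance_4d(x, xp, y, yp):
    """4D RMS emittance: square root of the determinant of the 4x4
    covariance matrix of (x, x', y, y')."""
    cov = np.cov(np.vstack([x, xp, y, yp]))
    return np.sqrt(np.linalg.det(cov))

# Toy check: an uncorrelated Gaussian beam with sigma_x = 1 mm, sigma_x' = 1 mrad
rng = np.random.default_rng(0)
x, xp = rng.normal(0, 1e-3, 100_000), rng.normal(0, 1e-3, 100_000)
print(rms_emittance(x, xp))   # ~1e-6 m*rad, i.e. ~1 mm*mrad
```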
Seismic attributes encapsulate substantial reservoir characterization information and can effectively support reservoir prediction. Given the high-dimensional nonlinear relationship between sandbodies and seismic attributes, this study employs the RFECV method for seismic attribute selection, feeding the optimized attributes into a LightGBM model to enhance the spatial delineation of sandbodies. By constructing training datasets from optimized seismic attributes and well logs, applying class imbalance correction to the input variables of the machine learning model, taking sandbody probability as the output variable, and employing grid search to optimize model parameters, a high-precision sandbody prediction model was established. Taking the 3D seismic data of Block F3 in the Dutch sector of the North Sea as an example, this method successfully depicted the three-dimensional spatial distribution of the target formation's sandstones. The results indicate that even under strong noise conditions, the multi-attribute sandbody identification method based on LightGBM effectively characterizes the distribution features of sandbodies. Compared to unselected attributes, the predictions using selected attributes have higher vertical resolution and inter-well conformity, with the prediction accuracy for single wells reaching 80.77%, significantly improving the accuracy of sandbody boundary delineation.
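A minimal sketch of this selection-then-classification pipeline, using the scikit-learn and LightGBM APIs on synthetic stand-in data (the attribute matrix, labels, and parameter grid below are placeholders, not the paper's data or settings):

```python
import numpy as np
import lightgbm as lgb
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-in for 20 seismic attributes at wells
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Recursive feature elimination with cross-validation over a LightGBM base model
selector = RFECV(lgb.LGBMClassifier(n_estimators=100), step=1,
                 cv=StratifiedKFold(5), scoring="accuracy")
X_sel = selector.fit_transform(X, y)

# Grid search over the selected attributes; predict_proba gives sand probability
grid = GridSearchCV(lgb.LGBMClassifier(),
                    {"num_leaves": [15, 31], "learning_rate": [0.05, 0.1]}, cv=5)
grid.fit(X_sel, y)
sand_probability = grid.best_estimator_.predict_proba(X_sel)[:, 1]
print(sand_probability[:5])
```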
To enhance the computational efficiency of spatio-temporally discretized phase-field models, we present a high-speed solver specifically designed for the Poisson equations, a component frequently used in the numerical computation of such models. This efficient solver employs algorithms based on discrete cosine transforms (DCT) or discrete sine transforms (DST) and is not restricted to any particular spatio-temporal scheme. Our proposed methodology is appropriate for a variety of phase-field models and is especially efficient when combined with flow field systems. An extensive numerical comparison shows that employing DCT and DST techniques not only yields results comparable to those obtained via the multigrid (MG) method, a conventional approach for solving the Poisson equations, but also improves computational efficiency by over 90%.
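To make the idea concrete, here is a minimal DCT-based fast Poisson solver for homogeneous Neumann boundaries on a cell-centered grid (our own sketch; the paper's solver covers further schemes, and the DST/Dirichlet case works analogously). The DCT diagonalizes the 5-point Laplacian, so a solve costs two transforms plus a pointwise division.

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_neumann_dct(f, h):
    """Solve u_xx + u_yy = f on a uniform cell-centered grid with homogeneous
    Neumann boundaries via the DCT-II diagonalization of the 5-point Laplacian.
    f must satisfy the compatibility condition mean(f) = 0."""
    ny, nx = f.shape
    fh = dctn(f, type=2, norm="ortho")
    kx = 2.0 * (np.cos(np.pi * np.arange(nx) / nx) - 1.0) / h**2
    ky = 2.0 * (np.cos(np.pi * np.arange(ny) / ny) - 1.0) / h**2
    denom = kx[None, :] + ky[:, None]
    denom[0, 0] = 1.0                  # zero mode: avoid division by zero
    uh = fh / denom
    uh[0, 0] = 0.0                     # pin the arbitrary additive constant
    return idctn(uh, type=2, norm="ortho")

# Quick check against a manufactured Neumann-compatible solution
n, h = 128, 1.0 / 128
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x)
u_exact = np.cos(np.pi * X) * np.cos(2 * np.pi * Y)
f = -(np.pi**2 + 4 * np.pi**2) * u_exact
u = poisson_neumann_dct(f, h)
print(np.abs(u - u_exact).max())       # second-order small
```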
Algorithms are the primary component of Artificial Intelligence (AI); an algorithm is the process in AI that imitates the human mind to solve problems. Currently, the performance of AI is evaluated by scoring AI algorithms with metrics on data sets. However, evaluating algorithms in this way is challenging because each type of algorithm has many data sets and evaluation metrics. Different algorithms may have individual strengths and weaknesses in metric scores on separate data sets, undermining the credibility and validity of the evaluation. Moreover, evaluating algorithms requires repeated experiments on different data sets, diverting researchers' attention from the algorithms themselves. Crucially, comparing metric scores does not take into account an algorithm's ability to solve problems, and the classical evaluation of time and space complexity is not suitable for AI algorithms: a classical algorithm's input is unbounded, whereas an AI algorithm's input is a data set, which is finite and varied. Given that current AI algorithm evaluation does not reflect problem-solving capability, this paper summarizes the features of AI algorithm evaluation and proposes an AI evaluation method that incorporates the problem-solving capabilities of algorithms.
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. Their mitigation determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and the Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. On the test sample, the proposed meta-algorithm is experimentally confirmed to outperform the basic classifiers in accuracy. Owing to its good accuracy (ROC-AUC = 0.88), such a model can serve as a methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. The use of such a sample will allow the top management of electric distribution companies to increase the efficiency of field inspections, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
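The stacking architecture described here can be sketched directly with scikit-learn; the base learners, layer sizes, and the logistic-regression meta-learner below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Homogeneous ANN ensemble: several MLPs that differ only in their random seed
ann_ensemble = VotingClassifier(
    [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                               random_state=i)) for i in range(5)],
    voting="soft")

# Stacking meta-algorithm over random forest, LightGBM and the ANN ensemble
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300)),
                ("lgbm", LGBMClassifier()),
                ("ann", ann_ensemble)],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba")

# Stand-in for daily consumption features and NTL labels
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(size=400) > 0).astype(int)
stack.fit(X, y)
print(stack.predict_proba(X[:5])[:, 1])   # per-consumer NTL probability
```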
The modeling of crack growth in three-dimensional (3D) space poses significant challenges in rock mechanics due to the complex numerical computation involved in simulating crack propagation and interaction in rock materials. In this study, we present a novel approach that introduces a 3D numerical manifold method (3D-NMM) with a geometric kernel to enhance computational efficiency. Specifically, the maximum tensile stress criterion is adopted as the crack growth criterion to achieve strongly discontinuous crack growth, and a local crack tracking algorithm and an angle correction technique are incorporated to address minor limitations of the algorithm in a 3D model. The program is implemented in Python, using object-oriented programming in two independent modules: a calculation module and a crack module. Furthermore, we propose feasible improvements to enhance the performance of the algorithm. Finally, we demonstrate the feasibility and effectiveness of the enhanced algorithm in the 3D-NMM using four numerical examples. This study establishes the potential of the 3D-NMM, combined with the local tracking algorithm, for accurately modeling 3D crack propagation in brittle rock materials.
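The maximum tensile stress criterion itself is compact; the fragment below is our schematic rendering (not code from the paper's Python modules): a crack-front point advances when the major principal stress exceeds the tensile strength, on the plane normal to that principal direction.

```python
import numpy as np

def tensile_growth_direction(stress, tensile_strength):
    """Maximum tensile stress criterion at a crack-front point.
    stress: symmetric 3x3 Cauchy stress tensor (tension positive).
    Returns (grow?, unit normal of the candidate growth plane)."""
    vals, vecs = np.linalg.eigh(stress)      # eigenvalues in ascending order
    sigma1, n1 = vals[-1], vecs[:, -1]       # major principal stress/direction
    return (sigma1 >= tensile_strength), n1

# Example: uniaxial tension of 6 MPa along z against a 5 MPa tensile strength
grow, normal = tensile_growth_direction(np.diag([0.0, 0.0, 6e6]), 5e6)
print(grow, normal)                          # True, ~[0, 0, 1]
```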
To accomplish reliability analyses of the correlation of multiple analytical objectives, an innovative framework of Dimensional Synchronous Modeling (DSM) and correlation analysis is developed based on a stepwise modeling strategy, the cell array operation principle, and Copula theory. Under this framework, we propose a DSM-based Enhanced Kriging (DSMEK) algorithm to synchronously derive models of multiple objectives, and explore an adaptive Copula function approach to analyze the correlation among those objectives and assess the overall reliability level. In the proposed DSMEK and adaptive Copula methods, the Kriging model is treated as the basis function of the DSMEK model, the Multi-Objective Snake Optimizer (MOSO) algorithm is used to search for the optimal hyperparameters of the basis functions, and the cell array operation principle is adopted to establish a whole model of multiple objectives; the goodness of fit is then used to determine the forms of the Copula functions, and the determined Copula functions are employed to perform the reliability analyses of the correlation of the analytical objectives. Furthermore, three examples, including multi-objective complex function approximation, aeroengine turbine bladed-disc multi-failure-mode reliability analyses, and aircraft landing gear system brake temperature reliability analyses, are used to verify the effectiveness of the proposed methods from both mathematical and engineering viewpoints. The results show that the DSMEK and adaptive Copula approaches hold obvious advantages in terms of modeling features and simulation performance. This work provides a useful way to model multiple analytical objectives and perform synthetic reliability analyses of complex structures/systems with multi-output responses.
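As a simplified illustration of the copula step (our sketch; the paper additionally selects the copula family adaptively by goodness of fit rather than fixing a Gaussian copula as done here), one can fit a copula to two response samples via normal scores and evaluate a joint probability:

```python
import numpy as np
from scipy import stats

def gaussian_copula_joint_failure(g1, g2, p1, p2):
    """Fit a Gaussian copula to two response samples (e.g. two failure modes)
    and return the joint probability that both fall below their marginal
    p1- and p2-quantiles."""
    n = len(g1)
    u1 = stats.rankdata(g1) / (n + 1)            # pseudo-observations in (0,1)
    u2 = stats.rankdata(g2) / (n + 1)
    z = stats.norm.ppf(np.column_stack([u1, u2]))
    rho = np.corrcoef(z.T)[0, 1]                 # copula correlation
    mvn = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    return mvn.cdf([stats.norm.ppf(p1), stats.norm.ppf(p2)])

# Toy correlated responses: joint 5%-5% tail probability exceeds 0.25% (independence)
g1 = np.random.default_rng(5).normal(size=1000)
g2 = 0.7 * g1 + 0.5 * np.random.default_rng(6).normal(size=1000)
print(gaussian_copula_joint_failure(g1, g2, 0.05, 0.05))
```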
The heterogeneous variational nodal method (HVNM) has emerged as a potential approach for solving high-fidelity neutron transport problems. However, achieving accurate results with HVNM in large-scale problems using high-fidelity models has been challenging due to prohibitive computational costs. This paper presents an efficient parallel algorithm tailored for HVNM based on the Message Passing Interface standard. The algorithm evenly distributes the response matrix sets among processors during the matrix formation process, thus enabling independent construction without communication. Once the formation tasks are completed, a collective operation merges and shares the matrix sets among the processors. For the solution process, the problem domain is decomposed into subdomains assigned to specific processors, and red-black Gauss-Seidel iteration is employed within each subdomain to solve the response matrix equation. Point-to-point communication is conducted between adjacent subdomains to exchange data along the boundaries. The accuracy and efficiency of the parallel algorithm are verified using the KAIST and JRR-3 test cases. Numerical results obtained with multiple processors agree well with those obtained from Monte Carlo calculations. The parallelization of HVNM results in eigenvalue errors of 31 pcm/-90 pcm and fission rate RMS errors of 1.22%/0.66% for the 3D KAIST problem and the 3D JRR-3 problem, respectively. In addition, the parallel algorithm significantly reduces computation time, with an efficiency of 68.51% using 36 processors in the KAIST problem and 77.14% using 144 processors in the JRR-3 problem.
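The subdomain iteration with boundary exchange can be sketched with mpi4py on a model problem (a strip-decomposed Poisson grid standing in for the response-matrix equation; the decomposition, grid sizes, and iteration count are illustrative assumptions, not the paper's solver):

```python
# Run with e.g.: mpirun -n 4 python rbgs.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_local, n = 64, 64                      # strip decomposition: rows per rank
u = np.zeros((n_local + 2, n))           # one ghost row above and below
f = np.ones((n_local, n))
h2 = (1.0 / (n_local * size)) ** 2

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for it in range(500):
    for color in (0, 1):                 # red sweep, then black sweep
        # point-to-point exchange of ghost rows with neighbouring subdomains
        comm.Sendrecv(u[1], dest=up, recvbuf=u[-1], source=down)
        comm.Sendrecv(u[-2], dest=down, recvbuf=u[0], source=up)
        for i in range(1, n_local + 1):
            # first interior column of this color, from the global parity
            j0 = ((i + rank * n_local) + color) % 2 + 1
            u[i, j0:n-1:2] = 0.25 * (u[i-1, j0:n-1:2] + u[i+1, j0:n-1:2]
                                     + u[i, j0-1:n-2:2] + u[i, j0+1:n:2]
                                     - h2 * f[i-1, j0:n-1:2])
print(rank, np.linalg.norm(u))
```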
Topography can strongly affect ground motion, yet studies quantifying the topographic effect of hill surfaces are relatively rare. In this paper, a new quantitative prediction method for the seismic topographic effect, based on the BP neural network algorithm and the three-dimensional finite element method (FEM), was developed. The FEM simulation results were compared with seismic records; the results show that the PGA and response spectra tend to increase with increasing elevation, but the correlation between PGA amplification factors and slope is not obvious for low hills. New BP neural network models were established for predicting the amplification factors of PGA and response spectra. Two combinations of input variables, both convenient to obtain, are proposed for predicting the amplification factors of PGA and response spectra, respectively. The absolute prediction errors are mostly within 0.1 for PGA amplification factors and mostly within 0.2 for the amplification factors of response spectra. One input-variable combination achieves better prediction performance, while the other extends more readily to new regions. Notably, the BP models employ only one hidden layer with about a hundred nodes, which makes them efficient to train.
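A one-hidden-layer network of this size is easy to reproduce; the sketch below uses scikit-learn's MLPRegressor on synthetic stand-in inputs (the paper's actual input-variable combinations and FEM training data are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows = hill-surface points, columns = easily obtained inputs
# such as elevation, slope and relative position (placeholders, not the
# paper's variables).
rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 3))
y = 1.0 + 0.8 * X[:, 0] + 0.1 * rng.normal(size=1000)   # PGA amplification factor

# A single hidden layer of ~100 nodes, matching the size quoted above
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(100,), max_iter=2000))
model.fit(X, y)
print(np.abs(model.predict(X[:5]) - y[:5]))   # errors mostly within ~0.1
```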
In the two-dimensional positioning method of pulsars, the grid method is used to provide estimates of the non-sensitive direction and the position. However, the grid method has a high computational load and low accuracy due to the grid interval. To improve estimation accuracy and reduce the computational load, we propose a fast two-dimensional positioning method for the Crab pulsar based on multiple optimization algorithms (FTPCO). The FTPCO uses the Levenberg-Marquardt (LM) algorithm, the three-point orientation (TPO) method, particle swarm optimization (PSO), and the Newton-Raphson-based optimizer (NRBO) to replace the grid method. First, to avoid the influence of the non-sensitive direction on positioning, we take the orbital error and the distortion of the pulsar profile as optimization objectives and combine the grid method with the LM algorithm or PSO to search for the non-sensitive direction. Then, on the sensitive plane perpendicular to the non-sensitive direction, the TPO method is proposed to rapidly search for the sensitive and sub-sensitive directions. Finally, the NRBO is employed along the sensitive and sub-sensitive directions to achieve two-dimensional positioning of the Crab pulsar. Simulation results show that, compared with the grid method, the FTPCO reduces the computational load by 89.4% and improves the positioning accuracy by approximately 38%. The FTPCO has the advantage of high real-time accuracy and does not fall into local optima.
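The core replacement of a grid scan by Levenberg-Marquardt refinement looks like this in SciPy (a toy residual stands in for the paper's orbital-error and profile-distortion objectives):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy residual: misfit between "measured" data and a model as a function of a
# 2-D estimate p = (p1, p2). In the paper the objectives couple orbital error
# and pulse-profile distortion; here they are synthetic.
true_p = np.array([1.3, -0.7])
def residuals(p):
    t = np.linspace(0, 1, 50)
    model = np.sin(2 * np.pi * (t + p[0])) * np.exp(-p[1] * t)
    data = np.sin(2 * np.pi * (t + true_p[0])) * np.exp(-true_p[1] * t)
    return model - data

# LM refinement from a coarse initial guess: far fewer model evaluations than
# stepping a fine grid over the whole search region.
sol = least_squares(residuals, x0=[1.0, -0.5], method="lm")
print(sol.x)          # converges to ~[1.3, -0.7]
```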
To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support length of proppant and poor placement in hydraulically intersecting fractures, this study considered the combined impact of geological and engineering factors on conductivity. Using reservoir production parameters and the discrete element method (DEM), multispherical proppants were constructed. Additionally, a 3D fracture model based on the specified conditions of the L block employed coupled computational fluid dynamics-discrete element method (CFD-DEM) simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures. Results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport. Moreover, the placement efficiency and migration distance of low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared with spherical ceramic proppants, with a 27.7% decrease in the volume fraction at the fracture surface, which in turn lowers the placement concentration and damages fracture conductivity. Compared with small-angle fractures, controlling artificial and natural fractures to expand at angles of 45° to 60° increases the effective support length by approximately 20.6%. During hydraulic fracturing of gas wells, the fracture support area and post-closure conductivity can be ensured by controlling the sphericity of proppants and adjusting the perforation direction to control the direction of artificial fractures.
Ocean bottom node (OBN) data acquisition is the main development direction of marine seismic exploration; it is widely promoted, especially in shallow sea environments. However, OBN receivers may move several times during long-term acquisition because they are easily affected by tides, currents, and other factors in the shallow sea environment. If uncorrected, these movements degrade the imaging quality of subsequent processing. Conventional secondary positioning does not consider the case of multiple receiver movements, and its accuracy is insufficient. The first arrivals of OBN seismic data in shallow seas mainly comprise refracted waves. In this study, a nonlinear model is established in accordance with the propagation mechanism of a refracted wave and its relationship with the traveltime curve to accurately locate multiple receiver movements. In addition, the Levenberg-Marquardt algorithm is used to reduce the influence of first-arrival picking errors and to automatically detect receiver movements, yielding accurate dynamic relocation of the receivers. Simulation and field data show that the proposed method can dynamically locate multiple receiver movements, thereby improving the accuracy of seismic imaging, and has high practical value.
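The heart of the method, fitting a receiver position to a head-wave traveltime model by Levenberg-Marquardt, can be sketched as follows. The single-layer model, velocity, and noise level are illustrative assumptions, and the paper's handling of multiple movements and automatic detection is not reproduced:

```python
import numpy as np
from scipy.optimize import least_squares

# Idealized head-wave model: first-arrival time from shot s to a receiver at
# r is t = |s - r| / v2 + tau, with refractor velocity v2 and intercept tau.
# Unknowns: receiver position (xr, yr) plus v2 and tau; shot positions and
# picked first arrivals are known.
rng = np.random.default_rng(2)
shots = rng.uniform(-2000, 2000, size=(80, 2))          # shot coordinates (m)
true = np.array([150.0, -80.0, 1800.0, 0.12])           # xr, yr, v2 (m/s), tau (s)
dist = np.linalg.norm(shots - true[:2], axis=1)
t_picked = dist / true[2] + true[3] + rng.normal(0, 2e-3, 80)  # noisy picks (s)

def residuals(p):
    xr, yr, v2, tau = p
    d = np.linalg.norm(shots - [xr, yr], axis=1)
    return d / v2 + tau - t_picked

sol = least_squares(residuals, x0=[0.0, 0.0, 1600.0, 0.1], method="lm")
print(sol.x[:2])    # recovered receiver position, robust to pick noise
```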
This study investigated the physicochemical properties, enzyme activities, volatile flavor components, microbial communities, and sensory evaluation of high-temperature Daqu (HTD) during the maturation process, and a standard system was established for comprehensive quality evaluation of HTD. There were obvious changes in the physicochemical properties, enzyme activities, and volatile flavor components at different storage periods, which affected the sensory evaluation of HTD to a certain extent. High-throughput sequencing revealed significant microbial diversity and showed that the bacterial community changed significantly more than the fungal community. During storage, the dominant genera were Kroppenstedtia and Thermoascus. The correlation between dominant microorganisms and quality indicators highlighted their role in HTD quality. Lactococcus, Candida, Pichia, Paecilomyces, and protease activity played crucial roles in the formation of isovaleraldehyde. Acidic protease activity had the greatest impact on the microbial community, and moisture promoted isobutyric acid generation. Furthermore, a comprehensive quality evaluation standard system was established by the entropy weight method combined with multi-factor fuzzy mathematics. Consequently, this study provides innovative insights for the comprehensive quality evaluation of HTD during storage and lays a groundwork for the scientific and rational storage of HTD and the quality control of sauce-flavor Baijiu.
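The entropy weight method used for the evaluation system is a standard construction; a minimal sketch follows (with made-up indicator values, not the study's measurements):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators that vary more across samples carry
    more information and receive larger weights.
    X: (n_samples, n_indicators) matrix of positive, benefit-type indicators."""
    P = X / X.sum(axis=0)                        # proportion of each sample
    n = X.shape[0]
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)           # entropy of each indicator
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()                           # normalized weights

# e.g. Daqu samples scored on moisture, acidity, protease activity (made up)
X = np.array([[0.38, 1.2, 45.0],
              [0.41, 0.9, 52.0],
              [0.35, 1.5, 48.0]])
print(entropy_weights(X))                        # weights sum to 1
```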
Due to the heterogeneity of rock masses and the variability of in situ stress, the traditional linear inversion method cannot achieve high accuracy for the in situ stress field. To address this challenge, nonlinear stress boundaries for a numerical model are determined through regression analysis of a series of nonlinear coefficient matrices derived from the bubbling method. Considering the randomness and flexibility of the bubbling method, a parametric study is conducted to determine recommended ranges for its parameters, including the standard deviation (σb) of bubble radii, the number (λ) of non-uniform coefficient matrices for nonlinear stress boundaries, and the number (m) and positions of in situ stress measurement points. A model case study provides a reference for the selection of these parameters. Additionally, when the nonlinear in situ stress inversion method is employed, stress distortion inevitably occurs near model boundaries, consistent with Saint-Venant's principle. Two strategies are proposed accordingly: systematically reducing the nonlinear coefficients to achieve high inversion accuracy while minimizing stress distortion, and excluding regions with severe stress distortion near the model edges while using the central part of the model for subsequent simulations. These two strategies have been successfully implemented in the nonlinear in situ stress inversion of the Xincheng Gold Mine and achieve higher inversion accuracy than the linear method. Specifically, the linear and nonlinear inversion methods yield root mean square errors (RMSE) of 4.15 and 3.2 and inversion relative errors (δAve) of 22.08% and 17.55%, respectively. Therefore, the nonlinear inversion method outperforms the traditional multiple linear regression method, even with a systematic reduction of the nonlinear stress boundaries.
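For contrast with the nonlinear scheme, the traditional linear inversion that the paper benchmarks against amounts to a least-squares superposition of unit boundary-load cases; the sketch below (synthetic numbers, not the Xincheng data) also shows the RMSE and relative-error metrics quoted above:

```python
import numpy as np

# Linear inversion by superposition: run K unit boundary-load cases in the
# numerical model, record the stress components they produce at the measured
# points, and regress the measured stresses on them.
# A[i, k] = stress component i from unit load case k (from the FE model);
# b[i]    = measured in situ stress component i. Values here are synthetic.
rng = np.random.default_rng(3)
K, m = 5, 12                                 # load cases, measured components
A = rng.normal(size=(m, K))
c_true = np.array([2.0, 1.1, 0.4, 0.9, 1.6])
b = A @ c_true + rng.normal(scale=0.1, size=m)

c, *_ = np.linalg.lstsq(A, b, rcond=None)    # regression coefficients
pred = A @ c
rmse = np.sqrt(np.mean((pred - b) ** 2))     # the paper's RMSE metric
rel_err = np.abs((pred - b) / b).mean() * 100  # relative error in percent
print(c, rmse, rel_err)
```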
Separation-of-variable (SOV) methods, such as the improved SOV method, the variational SOV method, and the extended SOV method, have been proposed by the present authors and coworkers to obtain closed-form analytical solutions for the free vibration and eigenbuckling of rectangular plates and circular cylindrical shells. Taking the free vibration of rectangular thin plates as an example, this work presents the theoretical framework of the SOV methods in an instructive way, along with bisection-based solution procedures for a group of nonlinear eigenvalue equations. Besides, explicit equations for the nodal lines of the SOV methods are presented, and the relations between nodal line patterns and frequency orders are investigated. It is concluded that the highly accurate SOV methods have the same accuracy for all frequencies, that mode shapes at repeated frequencies can be precisely captured, and that the SOV methods do not suffer from missing roots.
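The bisection-based root capturing mentioned here is simple to sketch: scan for sign changes, then bisect each bracket. The transcendental equation below (a clamped-clamped beam frequency equation) is a stand-in for the plates' coupled eigenvalue equations:

```python
import numpy as np

def bisect_roots(f, a, b, n_scan=2000, tol=1e-12):
    """Scan [a, b] for sign changes of f, then refine each bracketed root by
    bisection. A dense enough scan is what avoids missing closely spaced roots."""
    xs = np.linspace(a, b, n_scan)
    roots = []
    for x0, x1 in zip(xs[:-1], xs[1:]):
        f0 = f(x0)
        if f0 * f(x1) > 0:
            continue                       # no sign change in this cell
        while x1 - x0 > tol:
            xm = 0.5 * (x0 + x1)
            if f0 * f(xm) <= 0:
                x1 = xm                    # root lies in the left half
            else:
                x0, f0 = xm, f(xm)         # root lies in the right half
        roots.append(0.5 * (x0 + x1))
    return roots

# Example: cos(x) cosh(x) = 1, roots x = beta*L ~ 4.7300, 7.8532, 10.9956, ...
f = lambda x: np.cos(x) * np.cosh(x) - 1.0
print(bisect_roots(f, 1.0, 12.0))
```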
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data that sums to a constant, like 100%. The statistical linear model is the most widely used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique applied in various settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for tasks like future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
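A minimal EM implementation for this setting, under a multivariate normal model with values missing at random (our sketch, not the study's code), alternates conditional-mean imputation with moment updates:

```python
import numpy as np

def em_mvn(X, n_iter=100):
    """EM for a multivariate normal with missing values.
    E-step: fill missing entries with conditional means given observed ones;
    M-step: update mu and Sigma, adding the conditional covariances so the
    update is exact EM rather than plain regression imputation."""
    X = X.copy()
    miss = np.isnan(X)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    X[miss] = np.take(mu, np.where(miss)[1])     # initialize with column means
    S = np.cov(X, rowvar=False, bias=True)
    for _ in range(n_iter):
        C_sum = np.zeros((p, p))
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            reg = S[np.ix_(m, o)] @ np.linalg.inv(S[np.ix_(o, o)])
            X[i, m] = mu[m] + reg @ (X[i, o] - mu[o])         # E-step mean
            C_sum[np.ix_(m, m)] += S[np.ix_(m, m)] - reg @ S[np.ix_(o, m)]
        mu = X.mean(axis=0)                                   # M-step
        S = np.cov(X, rowvar=False, bias=True) + C_sum / n
    return X, mu, S

rng = np.random.default_rng(4)
Z = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], 400)
Z[rng.uniform(size=Z.shape) < 0.15] = np.nan     # 15% missing at random
X_complete, mu_hat, S_hat = em_mvn(Z)
beta, *_ = np.linalg.lstsq(                      # OLS on the completed data
    np.column_stack([np.ones(len(X_complete)), X_complete[:, :2]]),
    X_complete[:, 2], rcond=None)
print(mu_hat, beta)
```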
Soil improvement is one of the most important issues in geotechnical engineering practice. The wide application of traditional improvement techniques (cement/chemical materials) is limited because they damage the ecological environment and intensify carbon emissions. In contrast, the use of microbially induced calcium carbonate precipitation (MICP) to obtain bio-cement is a novel technique with the potential to induce soil stability, providing a low-carbon, environment-friendly, and sustainable integrated solution for some geotechnical engineering problems. This paper presents a comprehensive review of the latest progress in soil improvement based on the MICP strategy. It systematically summarizes the mineralization mechanism, influencing factors, improvement methods, engineering characteristics, and current field application status of MICP. Additionally, it explores the limitations of the approach and proposes prospective applications of MICP for soil improvement. This review indicates that utilizing different environmental calcium-based wastes in MICP, and combining materials with MICP, are conducive to meeting engineering and market demand. Furthermore, we recommend and encourage global collaborative study and practice with a view to commercializing the MICP technique in the future. This review aims to provide insights for engineers and interdisciplinary researchers and guidance for future engineering applications.