The design of a new Satellite Launch Vehicle (SLV) is of particular interest when a combination of solid and liquid propulsion is included. A conceptual design and optimization technique is proposed for a multistage Low Earth Orbit (LEO)-bound SLV comprising solid and liquid stages, with a Genetic Algorithm (GA) as the global optimizer. Convergence of the GA is improved by seeding the initial population using the Design of Experiments (DOE) technique. Latin Hypercube Sampling (LHS) is used as the DOE method for its good space-filling properties: LHS is a stratified random procedure that provides an efficient way of sampling variables from their multivariate distributions. In SLV design, minimum Gross Lift-off Weight (GLOW) is traditionally sought; since development costs tend to vary as a function of GLOW, minimum GLOW serves as a proxy for minimum development cost. The approach is well suited to initial design sizing, as its computational efficiency gives quick insight into vehicle performance prior to detailed design.
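A minimal sketch of the LHS-seeded initial population idea in Python, using SciPy's qmc module; the three design variables and their bounds are illustrative stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stage design variables (e.g., propellant masses per stage);
# the bounds below are illustrative, not taken from the paper.
lower = np.array([1.0e4, 5.0e3, 1.0e3])
upper = np.array([5.0e4, 2.0e4, 5.0e3])

def lhs_initial_population(pop_size: int, seed: int = 0) -> np.ndarray:
    """Seed a GA with an LHS design instead of uniform random points."""
    sampler = qmc.LatinHypercube(d=lower.size, seed=seed)
    unit = sampler.random(n=pop_size)       # stratified samples in [0, 1)^d
    return qmc.scale(unit, lower, upper)    # map to physical variable bounds

population = lhs_initial_population(pop_size=50)
# Feed `population` to any GA as generation 0, e.g., score with a GLOW model:
glow = population.sum(axis=1)               # toy fitness: total lift-off mass
print(population.shape, glow.min())
```

Because the strata guarantee coverage of every marginal interval, the GA starts from a population that already spans the design space, which is the convergence benefit the abstract describes.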
Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach known as disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the pure polygon disaggregation (PPD) approach assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), with two sample sizes (83,100 and 19,420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare disaggregation accuracy. The accuracy assessment used a set of 500 validation points obtained from the Middlesex County soil survey report. C5.0 (Kappa index = 0.58–0.63) performed better than RF (Kappa index = 0.53–0.54) at the larger sample size, and PPD with C5.0 at the larger sample size was the best-performing approach (Kappa index = 0.63). At the smaller sample size, cLHS (Kappa index = 0.41–0.48) and SRS (Kappa index = 0.40–0.47) produced similar accuracy. PPD demanded less processing capacity and time (1.62–5.93 h) while yielding maps with lower uncertainty than DSMART (2.75–194.2 h). For CSMs predominantly composed of pure polygons, PPD is the more efficient and rational choice for soil series disaggregation; DSMART remains preferable for disaggregating soil series that lack pure-polygon representation in the CSMs.
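As a concrete illustration of the validation step, a short sketch of computing the Kappa index with scikit-learn; the observed and predicted labels here are synthetic stand-ins for the 500 survey-report validation points.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
soil_series = np.array(["A", "B", "C", "D"])

# Stand-ins for the 500 validation points and a disaggregated map's
# predictions at those points (the study's actual labels are not reproduced).
observed = rng.choice(soil_series, size=500)
predicted = np.where(rng.random(500) < 0.7,
                     observed,                          # ~70% agreement
                     rng.choice(soil_series, size=500)) # rest random

kappa = cohen_kappa_score(observed, predicted)
print(f"Kappa index = {kappa:.2f}")  # the study reports 0.40-0.63 across designs
```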
Probabilistic assessment of seismic performance (SPPA) is a crucial aspect of evaluating the seismic behavior of structures. For complex bridges with inherent uncertainties, conducting precise and efficient seismic reliability analysis remains a significant challenge. To address this issue, the current study introduces a sample-unequal-weight fractional moment assessment method based on an improved correlation-reduced Latin hypercube sampling (ICLHS) technique. The method integrates the benefits of importance sampling techniques with interpolatory quadrature formulas to enhance the accuracy of estimating the extreme value distribution (EVD) of the seismic response of complex nonlinear structures subjected to non-stationary ground motions. Additionally, the core theoretical approaches employed in seismic reliability analysis (SRA) are elaborated, such as dimension reduction for simulating non-stationary random ground motions and a fractional-moment maximum-entropy single-loop solution strategy. The effectiveness of the proposed method is validated through a three-story nonlinear shear frame structure. Furthermore, a comprehensive reliability analysis of a real-world long-span, single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform, leading to key insights and conclusions.
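A minimal sketch of the fractional-moment ingredient in Python: estimating E[Z^α] of a positive extreme-value response from weighted LHS samples. The response function is a toy stand-in, equal weights replace the paper's unequal-weight scheme, and the subsequent maximum-entropy density fit is not shown.

```python
import numpy as np
from scipy.stats import qmc, norm

# Draw LHS samples of two standard-normal inputs.
sampler = qmc.LatinHypercube(d=2, seed=1)
x = norm.ppf(sampler.random(n=256))

# Toy positive extreme-value response standing in for a structural peak response.
z = np.exp(0.5 * x[:, 0]) + 0.1 * x[:, 1] ** 2

weights = np.full(len(z), 1.0 / len(z))   # equal weights; the paper's method
                                          # assigns unequal sample weights
alphas = [0.25, 0.5, 0.75, 1.0]
frac_moments = [(weights * z ** a).sum() for a in alphas]  # estimates of E[Z^alpha]
print(dict(zip(alphas, np.round(frac_moments, 4))))
```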
Coupling Bayes' theorem with a two-dimensional (2D) groundwater solute advection-diffusion transport equation allows an inverse model to be established to identify a set of contamination source parameters, including source intensity (M), release location (X0, Y0), and release time (T0), based on monitoring well data. To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters, a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy. To demonstrate how the model works, an exemplar problem with an instantaneous release of a contaminant into a confined groundwater aquifer was employed. The information entropy of the posterior distribution of the model parameters was used as the criterion for evaluating the quantity of information in the monitoring data. The optimal monitoring well position and monitoring frequency were solved by a two-step Monte Carlo method and a differential evolution algorithm, given known well monitoring locations and monitoring events. Based on the optimized monitoring well position and sampling frequency, the contamination source was identified by an improved Metropolis algorithm using the Latin hypercube sampling approach. The case study results show that: 1) the optimal monitoring well position (D) is at (445, 200); and 2) the optimal monitoring frequency (Δt) is 7, provided that the number of monitoring events is set to 5. Employing the optimized monitoring well position and frequency, the mean errors of the inverse modeling results for the source parameters (M, X0, Y0, T0) were 9.20%, 0.25%, 0.0061%, and 0.33%, respectively, so the optimized design substantially improves source identification. It was also found that the improved Metropolis-Hastings algorithm (a Markov chain Monte Carlo method) makes the inverse modeling result independent of the initial sampling points and achieves an overall optimization, which significantly improved the accuracy and numerical stability of the inverse modeling results.
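A compact random-walk Metropolis sketch of the source identification step; the one-dimensional toy likelihood, observations, and step sizes below are illustrative stand-ins for the paper's 2D advection-diffusion model and monitoring data.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.array([3.1, 2.4, 1.7])              # hypothetical well concentrations

def log_likelihood(theta):
    m, x0 = theta                            # source intensity and location (1D toy)
    pred = m * np.exp(-0.5 * (np.arange(3) - x0) ** 2)
    return -0.5 * np.sum((obs - pred) ** 2) / 0.1 ** 2

theta = np.array([1.0, 0.0])                 # chain start (the improved sampler
chain = []                                   # would draw starts via LHS instead)
for _ in range(5000):
    prop = theta + rng.normal(scale=[0.2, 0.2])
    # Accept with probability min(1, likelihood ratio); flat priors assumed.
    if np.log(rng.random()) < log_likelihood(prop) - log_likelihood(theta):
        theta = prop
    chain.append(theta)

posterior = np.array(chain[1000:])           # discard burn-in
print("posterior mean:", posterior.mean(axis=0))
```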
Sampling design (SD) plays a crucial role in providing reliable input for digital soil mapping (DSM) and increasing its efficiency. Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a selection procedure for identifying a set of sample locations spread over a geographical space or with good feature-space coverage. Good feature-space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation. First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in geographical space and illustrate the strengths and weaknesses of these SDs with respect to spatial coverage, simplicity, accuracy, and efficiency. Furthermore, Latin hypercube sampling, which obtains a full representation of the multivariate distribution in geographical space, is described in detail in terms of its development, improvement, and application. In addition, we discuss fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in a feature space. We then discuss some practical applications that are mainly addressed by conditioned Latin hypercube sampling, with its flexibility and feasibility for adding multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected by probability sampling is superior because it is free of model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and good feature-space coverage; 2) uncovering the real impacts of an SD on the integral DSM procedure; and 3) testing the feasibility and contribution of SDs in three-dimensional (3D) DSM with variability across multiple layers.
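To make the conditioned-LHS idea concrete, a toy greedy search that picks sites whose covariate values fill the marginal quantile strata of the covariate population; real cLHS implementations use simulated annealing over a combined objective, so this is only a simplified sketch with synthetic covariates.

```python
import numpy as np

rng = np.random.default_rng(3)
covariates = rng.random((10_000, 2))  # stand-in terrain covariates at candidate cells
n = 30                                # desired sample size
edges = [np.quantile(covariates[:, j], np.linspace(0, 1, n + 1)) for j in range(2)]

def stratum_miss(idx):
    """Count empty covariate strata for a candidate sample; 0 is a perfect LHS."""
    miss = 0
    for j in range(2):
        counts, _ = np.histogram(covariates[idx, j], bins=edges[j])
        miss += np.sum(counts == 0)
    return miss

best = rng.choice(len(covariates), n, replace=False)
for _ in range(2000):
    cand = best.copy()
    cand[rng.integers(n)] = rng.integers(len(covariates))  # swap one site
    if stratum_miss(cand) <= stratum_miss(best):
        best = cand
print("empty strata after search:", stratum_miss(best))
```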
Direct measurement of snow water equivalent (SWE) in snow-dominated mountainous areas is difficult, so its prediction is essential for water resources management in such areas. In addition, because of the nonlinear trend of the snow spatial distribution and the multiple factors influencing the SWE spatial distribution, statistical models are not usually able to produce acceptable results; applicable methods that can capture nonlinear trends are therefore necessary. In this research, the Sohrevard Watershed, located in northwest Iran, was selected as the case study for SWE prediction. A database was collected and the required maps were derived. Snow depth (SD) was measured at 150 points using two sampling patterns, systematic random sampling and Latin hypercube sampling (LHS), and snow density was measured at 18 randomly selected points; SWE was then calculated. SWE was predicted using artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and regression methods. The results showed that the ANN and ANFIS models performed better than the regression method under both sampling patterns. Moreover, based on most of the efficiency criteria, the efficiency of the ANN, ANFIS, and regression methods under the LHS pattern was higher than under the systematic random sampling pattern. However, there were no significant differences between ANN and ANFIS in SWE prediction. Data from both sampling patterns were most sensitive to elevation. In addition, the LHS and systematic random sampling patterns were least sensitive to profile curvature and plan curvature, respectively.
Digital soil mapping (DSM) aims to produce detailed maps of soil properties or soil classes to improve agricultural management and soil quality assessment. An optimized sampling design can reduce the substantial costs and effort associated with sampling, profile description, and laboratory analysis. The purpose of this study was to compare common sampling designs for DSM, including grid sampling (GS), grid random sampling (GRS), stratified random sampling (StRS), and conditioned Latin hypercube sampling (cLHS). In an agricultural field (11 ha) in Quebec, Canada, a total of 118 unique locations were selected using the four sampling designs (45 locations each), and an additional 30 sample locations were selected as an independent testing dataset (evaluation dataset). Soil visible near-infrared (Vis-NIR) spectra were collected in situ at the 148 locations (1 m depth), and soil cores were collected from a subset of 32 locations and subdivided at 10-cm depth intervals, totaling 251 samples. The Cubist model was used to elucidate the relationship between Vis-NIR spectra and soil properties (soil organic matter (SOM) and clay), and was then used to predict the soil properties at all 148 sample locations. Digital maps of soil properties at multiple depths for the entire field (148 sample locations) were prepared using a quantile random forest model to obtain complete model maps (CM-maps). Soil properties were also mapped using the 45 locations of each sampling design to obtain sampling design maps (SD-maps). The SD-maps were evaluated using the independent testing dataset (30 sample locations), and the spatial distribution and model uncertainty of each SD-map were compared with those of the corresponding CM-map. Spatial and feature-space coverage were compared across the four sampling designs. The results showed that GS gave the most even spatial coverage, cLHS gave the best coverage of the feature space, and GS and cLHS yielded similar prediction accuracies and spatial distributions of soil properties. SOM content was underestimated using GRS, with large errors at 0–50 cm depth, because some values were not captured by this sampling design, whereas StRS produced larger errors for the deeper soil layers. Predictions of SOM and clay contents were more accurate for topsoil (0–30 cm) than for deep subsoil (60–100 cm). It was concluded that soil sampling designs with either good spatial coverage or good feature-space coverage can provide good accuracy in 3D DSM, but their performance may differ among soil properties.
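A hedged sketch of producing per-location prediction intervals as an uncertainty map: scikit-learn's quantile gradient boosting stands in for the study's quantile random forest, and the spectra-derived features and SOM values are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.random((251, 5))                   # stand-in Vis-NIR-derived predictors
y = 3 * X[:, 0] + rng.normal(0, 0.3, 251)  # stand-in SOM content

# One model per quantile; interval width serves as the per-location uncertainty.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
          for q in (0.05, 0.5, 0.95)}

grid = rng.random((148, 5))                # the 148 mapped locations (synthetic)
lo, med, hi = (models[q].predict(grid) for q in (0.05, 0.5, 0.95))
uncertainty = hi - lo
print(med[:3], uncertainty[:3])
```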
The Multilayer Perceptron (MLP) is a fundamental neural network model widely applied in various domains, particularly for lightweight image classification, speech recognition, and natural language processing tasks. Despite its widespread success, training MLPs often encounters significant challenges, including susceptibility to local optima, slow convergence rates, and high sensitivity to initial weight configurations. To address these issues, this paper proposes a Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer (LOEV-APO), which enhances global exploration and local exploitation simultaneously. LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling (LHS) with Opposition-Based Learning (OBL), improving the diversity and coverage of the initial population. Moreover, an Elite Protozoa Variation Strategy (EPVS) is incorporated, which applies differential mutation operations to elite candidates, accelerating convergence and strengthening local search around high-quality solutions. Extensive experiments are conducted on six classification tasks and four function approximation tasks, covering a wide range of problem complexities and demonstrating superior generalization performance. The results show that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in terms of convergence speed, solution accuracy, and robustness. These findings suggest that LOEV-APO is a promising optimization tool for MLP training and a viable alternative to traditional gradient-based methods.
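A minimal sketch of the LHS-plus-OBL hybrid initialization; the sphere function stands in for the MLP training loss, and the dimension, bounds, and population size are arbitrary.

```python
import numpy as np
from scipy.stats import qmc

dim, pop = 10, 40
lower = np.full(dim, -5.0)
upper = np.full(dim, 5.0)

def fitness(w):                       # hypothetical loss of an MLP weight vector
    return np.sum(w ** 2, axis=1)

# LHS half of the initial population, mapped to the search bounds.
lhs = qmc.scale(qmc.LatinHypercube(d=dim, seed=5).random(pop), lower, upper)
# Opposition-based learning: mirror each point through the box center.
opposite = lower + upper - lhs

# Keep the fittest `pop` individuals out of both sets as generation 0.
both = np.vstack([lhs, opposite])
init = both[np.argsort(fitness(both))[:pop]]
print(init.shape, fitness(init).min())
```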
This paper introduces the Particle Swarm Optimization (PSO) algorithm to enhance the Latin Hypercube Sampling (LHS) process. The key objective is to mitigate the long computation times and low accuracy typically encountered when applying Monte Carlo Simulation (MCS) with LHS to probabilistic trend calculations. The PSO method optimizes the sample distribution, enhances global search capability, and significantly boosts computational efficiency. To validate its effectiveness, the proposed method was applied to IEEE 34- and IEEE 118-node systems containing wind power, and its performance was compared with Latin Hypercube Importance Sampling (LHIS), which integrates importance sampling with the Monte Carlo method. The comparison shows that the PSO-enhanced method significantly improves the uniformity and representativeness of the sampling, which reduces data errors and improves both computational accuracy and convergence speed.
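A small experiment showing why stratified LHS helps relative to crude Monte Carlo: for a toy response under normal inputs, the spread of the mean estimate across repeated runs is typically smaller with LHS at the same sample count. The PSO refinement of the sampling plan, as in the paper, is not shown.

```python
import numpy as np
from scipy.stats import qmc, norm

def response(x):                      # stand-in for a probabilistic load-flow output
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

def estimate(method, n, seed):
    """One mean estimate from n samples, by crude MCS or by LHS."""
    if method == "mcs":
        u = np.random.default_rng(seed).random((n, 2))
    else:
        u = qmc.LatinHypercube(d=2, seed=seed).random(n)
    return response(norm.ppf(u)).mean()

for method in ("mcs", "lhs"):
    est = [estimate(method, 200, s) for s in range(50)]
    print(method, f"std of mean estimate = {np.std(est):.4f}")  # LHS is smaller
```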
The ability to predict the anti-interference communications performance of unmanned aerial vehicle (UAV) data links is critical for intelligent route planning of UAVs in real combat scenarios. Previous research in this area has several limitations: classifiers exhibit low training efficiency, their precision drops notably on imbalanced samples, and they cannot be applied when the UAV's flight altitude and antenna bearing vary. This paper proposes the sequential Latin hypercube sampling (SLHS)-support vector machine (SVM)-AdaBoost algorithm, which enhances the training efficiency of the base classifier and circumvents local optima during the search process through SLHS optimization. It also mitigates the sample imbalance bottleneck by adjusting the sample weight distribution with the AdaBoost algorithm. In comparisons, the modeling efficiency, prediction accuracy on the test set, and macro-averaged precision, recall, and F1-score of SLHS-SVM-AdaBoost improve by 22.7%, 5.7%, 36.0%, 25.0%, and 34.2%, respectively, over Grid-SVM, and by 22.2%, 2.1%, 11.3%, 2.8%, and 7.4%, respectively, over particle swarm optimization (PSO)-SVM-AdaBoost. Combining Latin hypercube sampling with the SLHS-SVM-AdaBoost algorithm, a classification prediction model of the anti-interference performance of UAV data links is established that accounts for factors such as the three-dimensional position of the UAV and the antenna bearing; it is used to assess the safety of a classical flight path and to optimize the route. It was found that the risk of loss of communications cannot be completely avoided by adjusting the flight altitude along the classical path, whereas intelligent path planning based on the classification prediction model completely avoids interference while reducing the route length by at least 2.3%, benefiting both safety and operational efficiency.
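A sketch of the SVM-AdaBoost building block, assuming scikit-learn >= 1.2 (for the `estimator=` keyword); the imbalanced toy dataset stands in for the UAV link features, and the fixed hyperparameters would, in the paper, be tuned by sequential LHS rather than set by hand.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(600, 4))                    # stand-ins for position/bearing features
y = (X[:, 0] + 0.5 * X[:, 1] > 1.2).astype(int)  # minority "interfered" class (~14%)

clf = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0, gamma=0.5),  # base SVM (supports sample_weight)
    n_estimators=25,
    algorithm="SAMME",   # discrete boosting works with non-probabilistic SVC
)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y), " positive rate:", y.mean())
```

Reweighting misclassified (mostly minority-class) samples at each boosting round is what addresses the imbalance bottleneck the abstract mentions.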
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process that considers efficiencies at both 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, the blade outlet angle β2, and the blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated with the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% at 1.0Qd and 1.4Qd, respectively. A comparison of the internal flow between the original and optimized pumps illustrates the performance improvement. The optimization process can serve as a useful reference for improving the performance of other pumps, and even for reducing pressure fluctuations.
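A sketch of the design-of-experiments step with an optimized LHS plan over the three variables, assuming a recent SciPy (>= 1.10 for the `optimization` argument); the variable bounds are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative bounds: b2 [mm], beta2 [deg], wrap angle phi [deg].
lower = np.array([8.0, 20.0, 90.0])
upper = np.array([14.0, 35.0, 130.0])

# "random-cd" perturbs the Latin hypercube to lower its centered discrepancy,
# i.e., an "optimal LHS" plan in the sense of better space filling.
sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=7)
plan = qmc.scale(sampler.random(n=30), lower, upper)
print(plan[:3])
# Each row is one candidate impeller to mesh and evaluate in CFD at 1.0Qd and
# 1.4Qd; a surrogate is then fitted to the 30 (geometry, efficiency) pairs.
```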
This study investigates strategies for solving the system reliability of large three-dimensional jacket structures. These structural systems normally fail through a sequence of different component failures. The failure characteristics are investigated under various environmental conditions and direction combinations. The β-unzipping technique is adopted to determine critical failure components, and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability. However, this approach requires excessive computational effort for searching failure components and failure paths. Based on a trained artificial neural network (ANN), which can approximate the implicit limit-state function of a complicated structure, a new alternative procedure is proposed to improve the efficiency of the system reliability analysis. The failure probability is calculated through Monte Carlo simulation (MCS) with Latin hypercube sampling (LHS). The features and applicability of the procedure are discussed and compared using an example jacket platform located in the Chengdao Oilfield, Bohai Sea, China. This study provides a reference for evaluating the system reliability of jacket structures.
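A minimal sketch of the surrogate-based reliability loop: train an ANN on a modest number of limit-state evaluations, then run LHS Monte Carlo on the cheap surrogate. The analytic limit-state function below stands in for a structural finite element analysis.

```python
import numpy as np
from scipy.stats import qmc, norm
from sklearn.neural_network import MLPRegressor

def g(x):                                   # implicit limit state: failure when g < 0
    return 3.0 - x[:, 0] - 0.4 * x[:, 1] ** 2

# Train the ANN on 200 LHS points in standard normal space (the "expensive" runs).
train_x = norm.ppf(qmc.LatinHypercube(d=2, seed=8).random(200))
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(train_x, g(train_x))

# MCS with LHS on the surrogate only: 100k points cost almost nothing.
mc_x = norm.ppf(qmc.LatinHypercube(d=2, seed=9).random(100_000))
pf = np.mean(ann.predict(mc_x) < 0.0)
print(f"Pf ~ {pf:.4f} (direct MC on g for reference: {np.mean(g(mc_x) < 0.0):.4f})")
```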
In this study, the seismic stability of arch dam abutments is investigated within the framework of the probabilistic method. A large concrete arch dam is considered with six wedges for each abutment. The seismic safety of the dam abutments is studied with quasi-static analysis for different hazard levels. The Londe limit equilibrium method is used to calculate the stability of the wedges in the abutments. Since the finite element method is time-consuming, a neural network is used as an alternative for calculating the wedge safety factor. For training the neural network, 1000 random samples are generated and the dam response is calculated. The direction of the applied acceleration is varied in 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor. Latin hypercube sampling (LHS) is employed for sample generation, and the safety level is determined by reliability analysis. Three sample sizes of 1000, 2000, and 4000 are used to examine the mean and standard deviation of the results. Global sensitivity analysis is used to identify the effects of the random variables on abutment stability. It is shown that friction, cohesion, and uplift pressure have the most significant effects on the variance of the wedge stability.
To optimize peaking operation when a high proportion of new energy is connected to the power grid, evaluation indexes are proposed that simultaneously consider wind-solar complementation and source-load coupling. A typical wind-solar power output scene model based on peaking demand is established, which has an anti-peaking characteristic. The model uses balancing scenes and key scenes with probability distributions, based on an improved Latin hypercube sampling (LHS) algorithm and scene reduction technology, to illustrate the influence of wind and solar power on peaking demand. On this basis, a peak-shaving operation optimization model for power generation with a high proportion of new energy is established. The operating indexes after optimization in multi-scene peaking are calculated, and the grid's peaking capability is compared with that obtained when wind-solar complementation and source-load coupling are considered. Finally, a case with a high proportion of new energy verifies the feasibility and validity of the proposed operation strategy.
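A sketch of scene generation and reduction: LHS draws over assumed Beta output marginals (stand-ins for real wind/solar forecast models), with k-means centroids and cluster weights standing in for the paper's scene reduction technology.

```python
import numpy as np
from scipy.stats import qmc, beta
from sklearn.cluster import KMeans

# 1000 wind-solar scenes via LHS through illustrative Beta marginals.
u = qmc.LatinHypercube(d=2, seed=10).random(1000)
scenes = np.column_stack([beta.ppf(u[:, 0], 2, 5),   # wind output (p.u.)
                          beta.ppf(u[:, 1], 5, 2)])  # solar output (p.u.)

# Reduce to 10 key scenes; cluster sizes give each scene's probability weight.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(scenes)
key_scenes = km.cluster_centers_
probs = np.bincount(km.labels_, minlength=10) / len(scenes)
print(key_scenes[:3])
print(probs.round(3))   # feeds the peak-shaving optimization as scenario weights
```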
Nutrient release from sediment is considered a significant source for overlying water. Given that nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to simulate these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, this model was used to simulate nitrogen exchange at the sediment-water interface in eight scenarios. The results showed that sediment acted as a buffer in the sediment-water system: it could store or release nitrogen at any time, regulate the distribution of nitrogen between sediment and the water column, and provide algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings. However, sediment release might continue to contribute to the water column until a new balance is achieved; effective measures for reducing sediment nitrogen should therefore be developed as supplementary measures. Furthermore, model parameter sensitivity should be examined individually for different research subjects.
The anti-sliding stability of a gravity dam along its foundation surface is a key problem in gravity dam design. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting gravity dam anti-sliding stability along the foundation surface. According to the design specifications, the loads and factors affecting the stability of a gravity dam were comprehensively selected. The sensitivity of the factors was first analyzed using the Sobol method with Latin hypercube sampling, and the results were then verified against those obtained with the Garson method. Finally, the effects of different sampling methods, probability distribution types of the factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province, China, showed that the dominant factors affecting anti-sliding stability were the anti-shear cohesion, upstream and downstream water levels, anti-shear friction coefficient, uplift pressure reduction coefficient, concrete density, and silt height. The choice of sampling method had no significant effect, but the probability distribution type and the range of factor values greatly affected the results; these two elements should therefore be considered carefully to improve the reliability of dam anti-sliding stability analysis.
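A sketch of the Sobol screening step, assuming the third-party SALib package; the stability-factor function and variable ranges are illustrative stand-ins for the dam's anti-sliding safety factor and its design-code load factors.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["cohesion", "friction", "uplift"],
    "bounds": [[0.5, 1.5], [0.8, 1.2], [0.2, 0.6]],  # illustrative ranges
}

# Quasi-random Saltelli design; N samples expand to N * (2D + 2) model runs.
X = saltelli.sample(problem, 1024)

# Toy safety-factor model with an interaction term, standing in for the dam model.
Y = 2.0 * X[:, 0] + X[:, 1] - 3.0 * X[:, 2] + 0.5 * X[:, 1] * X[:, 2]

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order indices
```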
A surrogate model is introduced for identifying the optimal remediation strategy for Dense Non-Aqueous Phase Liquid (DNAPL)-contaminated aquifers. A Latin hypercube sampling (LHS) method was used to collect data in the feasible region of the input variables, and a surrogate of the multi-phase flow simulation model was developed using a radial basis function artificial neural network (RBFANN). The developed model was applied to a perchloroethylene (PCE)-contaminated aquifer remediation optimization problem. The relative errors of the average PCE removal rates between the surrogate model and the simulation model for 10 validation samples were below 5%, indicating high approximation accuracy. A comparison of the surrogate-based simulation optimization model with a conventional simulation optimization model indicated that the RBFANN surrogate model developed in this paper considerably reduced the computational burden of the simulation optimization process.
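A minimal sketch of the surrogate workflow, with SciPy's RBFInterpolator standing in for the paper's RBF neural network; the removal-rate function and design bounds are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def removal_rate(x):                 # stand-in for the expensive multi-phase model
    return 100.0 * (1.0 - np.exp(-x[:, 0])) - 5.0 * x[:, 1]

# LHS over the feasible region of two remediation design variables.
lower, upper = np.array([0.1, 0.0]), np.array([3.0, 1.0])
X = qmc.scale(qmc.LatinHypercube(d=2, seed=11).random(40), lower, upper)

# Fit an RBF surrogate to the 40 (design, removal rate) pairs.
surrogate = RBFInterpolator(X, removal_rate(X), kernel="thin_plate_spline")

# Check approximation accuracy on independent validation designs.
X_val = qmc.scale(qmc.LatinHypercube(d=2, seed=12).random(10), lower, upper)
rel_err = np.abs(surrogate(X_val) - removal_rate(X_val)) / removal_rate(X_val)
print(f"max relative error: {100 * rel_err.max():.2f}%")  # paper reports < 5%
```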
This study presents a robust design method for autonomous photovoltaic (PV)-wind hybrid power systems, aimed at obtaining an optimum system configuration that is insensitive to design variable variations. The problem is formulated as a constrained multi-objective optimization problem and solved by a multi-objective genetic algorithm, NSGA-II. The Monte Carlo Simulation (MCS) method, combined with Latin Hypercube Sampling (LHS), is applied to evaluate the stochastic system performance. The potential of the proposed method is demonstrated with a conceptual system design, and a comparative study between the proposed robust method and a deterministic method from the literature is conducted. The results indicate that the proposed method finds a large number of Pareto-optimal system configurations with better compromise performance than the deterministic method, and trade-off information can be derived by systematically comparing these configurations. The proposed robust design method should be useful for hybrid power systems that require both optimality and robustness.
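A sketch of the robust multi-objective formulation, assuming the third-party pymoo package; the two objectives (stand-ins for system cost and probability of power shortfall) embed a small Monte Carlo over perturbed designs, and all functions and bounds are illustrative.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

rng = np.random.default_rng(13)

class RobustHybrid(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=2, n_obj=2, xl=[1.0, 1.0], xu=[10.0, 10.0])

    def _evaluate(self, x, out, *args, **kwargs):
        cost = 2.0 * x[0] + 3.0 * x[1]                      # PV + wind sizing proxy
        perturbed = x + rng.normal(0.0, 0.1, size=(64, 2))  # resource/parameter noise
        shortfall = np.mean(perturbed.sum(axis=1) < 8.0)    # toy reliability metric
        out["F"] = [cost, shortfall]

res = minimize(RobustHybrid(), NSGA2(pop_size=40), ("n_gen", 30),
               seed=1, verbose=False)
print(res.F[:5])   # Pareto front: cost vs. probability of power shortfall
```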
This paper presents an artificial neural network (ANN)-based response surface method for predicting the failure probability of c-φ slopes with spatially variable soil. In this method, the Latin hypercube sampling technique is adopted to generate input datasets for establishing an ANN model; the random finite element method is then used to calculate the corresponding output datasets, considering the spatial variability of soil properties; and finally, an ANN model is trained to construct the response surface of the failure probability and obtain an approximate function that incorporates the relevant variables. The results of the illustrative example indicate that the proposed method provides credible and accurate estimates of the failure probability. The resulting approximate function can therefore be used as an alternative to the full analysis process in c-φ slope reliability analyses.
Because the first step for the fire/gas-detection system of a floating production storage and offloading (FPSO) unit is to identify leakage accidents, gas detectors play an important role in controlling leakage risk. To improve the leakage scenario detection rate and reduce the cumulative risk value, this paper presents an optimization method for gas detector placement. The probability density distributions and cumulative probability density distributions of the leakage source variables and environmental variables were calculated based on the Offshore Reliability Data and statistical data on the relevant leakage variables. A potential leakage scenario set was constructed using Latin hypercube sampling. Typical FPSO leakage scenarios were analyzed through computational fluid dynamics (CFD), and the impacts of different parameters on the leakage were addressed. A series of detectors was deployed according to the simulation results. Minimizing the product of effective detection time and gas leakage volume was taken as the risk optimization objective, with the locations and number of detectors as decision variables. A greedy extraction heuristic algorithm was used to solve the optimization problem. The results show that the optimized placement monitors the leakage scenarios more effectively.
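A minimal sketch of the greedy placement heuristic over a scenario-by-site detection-time matrix; all numbers are synthetic stand-ins for the CFD results and LHS-built scenario set.

```python
import numpy as np

rng = np.random.default_rng(14)
n_scen, n_sites = 200, 40               # LHS-built leak scenarios, candidate sites
t_detect = rng.uniform(5.0, 300.0, (n_scen, n_sites))  # stand-in CFD times [s]
leak_vol = rng.uniform(1.0, 50.0, n_scen)              # stand-in leak volumes
prob = np.full(n_scen, 1.0 / n_scen)                   # scenario probabilities

def risk(chosen):
    """Cumulative risk for a detector set: earliest detection per scenario,
    weighted by scenario probability and leaked volume."""
    t_best = t_detect[:, chosen].min(axis=1)
    return np.sum(prob * leak_vol * t_best)

chosen = []
for _ in range(5):                      # budget of five detectors
    best = min(set(range(n_sites)) - set(chosen),
               key=lambda s: risk(chosen + [s]))
    chosen.append(best)
    print(f"add site {best:2d} -> risk {risk(chosen):.1f}")
```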
文摘The design of new Satellite Launch Vehicle (SLV) is of interest, especially when a combination of Solid and Liquid Propulsion is included. Proposed is a conceptual design and optimization technique for multistage Low Earth Orbit (LEO) bound SLV comprising of solid and liquid stages with the use of Genetic Algorithm (GA) as global optimizer. Convergence of GA is improved by introducing initial population based on the Design of Experiments (DOE) Technique. Latin Hypercube Sampling (LHS)-DOE is used for its good space filling properties. LHS is a stratified random procedure that provides an efficient way of sampling variables from their multivariate distributions. In SLV design minimum Gross Lift offWeight (GLOW) concept is traditionally being sought. Since the development costs tend to vary as a function of GLOW, this minimum GLOW is considered as a minimum development cost concept. The design approach is meaningful to initial design sizing purpose for its computational efficiency gives a quick insight into the vehicle performance prior to detailed design.
基金the Ontario Ministry of Agriculture,Food and Rural Affairs,Canada,who supported this project by providing updated soil information on Ontario and Middlesex Countysupported by the Natural Science and Engineering Research Council of Canada(No.RGPIN-2014-4100)。
文摘Conventional soil maps(CSMs)often have multiple soil types within a single polygon,which hinders the ability of machine learning to accurately predict soils.Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs.The approach disaggregation and harmonization of soil map units through resampled classification trees(DSMART)is popular but computationally intensive,as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs.Alternatively,the disaggregation approach pure polygon disaggregation(PPD)assigns soil series based solely on the proportions of soil series in pure polygons in CSMs.This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County,Ontario,Canada.Four different sampling methods were used:two sampling designs,simple random sampling(SRS)and conditional Latin hypercube sampling(cLHS),with two sample sizes(83100 and 19420 samples per sampling plan),both based on an area-weighted approach.Two machine learning algorithms(MLAs),C5.0 decision tree(C5.0)and random forest(RF),were applied to the disaggregation approaches to compare the disaggregation accuracy.The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report.The MLA C5.0(Kappa index=0.58–0.63)showed better performance than RF(Kappa index=0.53–0.54)based on the larger sample size,and PPD with C5.0 based on the larger sample size was the best-performing(Kappa index=0.63)approach.Based on the smaller sample size,both cLHS(Kappa index=0.41–0.48)and SRS(Kappa index=0.40–0.47)produced similar accuracy results.The disaggregation approach PPD exhibited lower processing capacity and time demands(1.62–5.93 h)while yielding maps with lower uncertainty as compared to DSMART(2.75–194.2 h).For CSMs predominantly composed of pure polygons,utilizing PPD for soil series disaggregation is a more efficient and rational choice.However,DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
基金Sichuan Science and Technology Program under Grant No.2024NSFSC0932the National Natural Science Foundation of China under Grant No.52008047。
文摘Probabilistic assessment of seismic performance(SPPA)is a crucial aspect of evaluating the seismic behavior of structures.For complex bridges with inherent uncertainties,conducting precise and efficient seismic reliability analysis remains a significant challenge.To address this issue,the current study introduces a sample-unequal weight fractional moment assessment method,which is based on an improved correlation-reduced Latin hypercube sampling(ICLHS)technique.This method integrates the benefits of important sampling techniques with interpolator quadrature formulas to enhance the accuracy of estimating the extreme value distribution(EVD)for the seismic response of complex nonlinear structures subjected to non-stationary ground motions.Additionally,the core theoretical approaches employed in seismic reliability analysis(SRA)are elaborated,such as dimension reduction for simulating non-stationary random ground motions and a fractional-maximum entropy single-loop solution strategy.The effectiveness of this proposed method is validated through a three-story nonlinear shear frame structure.Furthermore,a comprehensive reliability analysis of a real-world long-span,single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform,leading to key insights and conclusions.
基金This work was supported by Major Science and Technology Program for Water Pollution Control and Treatment(No.2015ZX07406005)Also thanks to the National Natural Science Foundation of China(No.41430643 and No.51774270)the National Key Research&Development Plan(No.2016YFC0501109).
文摘Coupling Bayes’Theorem with a two-dimensional(2D)groundwater solute advection-diffusion transport equation allows an inverse model to be established to identify a set of contamination source parameters including source intensity(M),release location(0 X,0 Y)and release time(0 T),based on monitoring well data.To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters,a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy.To demonstrate how the model works,an exemplar problem with an instantaneous release of a contaminant in a confined groundwater aquifer was employed.The information entropy of the model parameters posterior distribution was used as a criterion to evaluate the monitoring data quantity index.The optimal monitoring well position and monitoring frequency were solved by the two-step Monte Carlo method and differential evolution algorithm given a known well monitoring locations and monitoring events.Based on the optimized monitoring well position and sampling frequency,the contamination source was identified by an improved Metropolis algorithm using the Latin hypercube sampling approach.The case study results show that the following parameters were obtained:1)the optimal monitoring well position(D)is at(445,200);and 2)the optimal monitoring frequency(Δt)is 7,providing that the monitoring events is set as 5 times.Employing the optimized monitoring well position and frequency,the mean errors of inverse modeling results in source parameters(M,X0,Y0,T0)were 9.20%,0.25%,0.0061%,and 0.33%,respectively.The optimized monitoring well position and sampling frequency canIt was also learnt that the improved Metropolis-Hastings algorithm(a Markov chain Monte Carlo method)can make the inverse modeling result independent of the initial sampling points and achieves an overall optimization,which significantly improved the accuracy and numerical stability of the inverse modeling results.
基金funded by the Natural Science and Engineering Research Council (NSERC) of Canada (No. RGPIN-2014-04100)
文摘Sampling design(SD) plays a crucial role in providing reliable input for digital soil mapping(DSM) and increasing its efficiency.Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a selection procedure for identifying a set of sample locations spread over a geographical space or with a good feature space coverage. A good feature space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation.First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in a geographical space and illustrate the strengths and weaknesses of these SDs by considering spatial coverage, simplicity, accuracy, and efficiency. Furthermore, Latin hypercube sampling, which obtains a full representation of multivariate distribution in geographical space, is described in detail for its development, improvement, and application. In addition, we discuss the fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in a feature space. We then discuss some practical applications that are mainly addressed by the conditioned Latin hypercube sampling with the flexibility and feasibility of adding multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected from the probability sampling is superior for its free model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and feature space coverage; 2) uncovering the real impacts of an SD on the integral DSM procedure;and 3) testing the feasibility and contribution of SDs in three-dimensional(3 D) DSM with variability for multiple layers.
文摘Direct measurement of snow water equivalent(SWE)in snow-dominated mountainous areas is difficult,thus its prediction is essential for water resources management in such areas.In addition,because of nonlinear trend of snow spatial distribution and the multiple influencing factors concerning the SWE spatial distribution,statistical models are not usually able to present acceptable results.Therefore,applicable methods that are able to predict nonlinear trends are necessary.In this research,to predict SWE,the Sohrevard Watershed located in northwest of Iran was selected as the case study.Database was collected,and the required maps were derived.Snow depth(SD)at 150 points with two sampling patterns including systematic random sampling and Latin hypercube sampling(LHS),and snow density at 18 points were randomly measured,and then SWE was calculated.SWE was predicted using artificial neural network(ANN),adaptive neuro-fuzzy inference system(ANFIS)and regression methods.The results showed that the performance of ANN and ANFIS models with two sampling patterns were observed better than the regression method.Moreover,based on most of the efficiency criteria,the efficiency of ANN,ANFIS and regression methods under LHS pattern were observed higher than the systematic random sampling pattern.However,there were no significant differences between the two methods of ANN and ANFIS in SWE prediction.Data of both two sampling patterns had the highest sensitivity to the elevation.In addition,the LHS and the systematic random sampling patterns had the least sensitivity to the profile curvature and plan curvature,respectively.
基金the National Science and Engineering Research Council of Canada(No.RGPIN-2014-04100)for funding this project.
文摘Digital soil mapping (DSM) aims to produce detailed maps of soil properties or soil classes to improve agricultural management and soil quality assessment. Optimized sampling design can reduce the substantial costs and efforts associated with sampling, profile description, and laboratory analysis. The purpose of this study was to compare common sampling designs for DSM, including grid sampling (GS), grid random sampling (GRS), stratified random sampling (StRS), and conditioned Latin hypercube sampling (cLHS). In an agricultural field (11 ha) in Quebec, Canada, a total of unique 118 locations were selected using each of the four sampling designs (45 locations each), and additional 30 sample locations were selected as an independent testing dataset (evaluation dataset). Soil visible near-infrared (Vis-NIR) spectra were collected in situ at the 148 locations (1 m depth), and soil cores were collected from a subset of 32 locations and subdivided at 10-cm depth intervals, totaling 251 samples. The Cubist model was used to elucidate the relationship between Vis-NIR spectra and soil properties (soil organic matter (SOM) and clay), which was then used to predict the soil properties at all 148 sample locations. Digital maps of soil properties at multiple depths for the entire field (148 sample locations) were prepared using a quantile random forest model to obtain complete model maps (CM-maps). Soil properties were also mapped using the samples from each of the 45 locations for each sampling design to obtain sampling design maps (SD-maps). The SD-maps were evaluated using the independent testing dataset (30 sample locations), and the spatial distribution and model uncertainty of each SD-map were compared with those of the corresponding CM-map. The spatial and feature space coverage were compared across the four sampling designs. The results showed that GS resulted in the most even spatial coverage, cLHS resulted in the best coverage of the feature space, and GS and cLHS resulted in similar prediction accuracies and spatial distributions of soil properties. The SOM content was underestimated using GRS, with large errors at 0–50 cm depth, due to some values not being captured by this sampling design, whereas larger errors for the deeper soil layers were produced using StRS. Predictions of SOM and clay contents had higher accuracy for topsoil (0–30 cm) than for deep subsoil (60–100 cm). It was concluded that the soil sampling designs with either good spatial coverage or feature space coverage can provide good accuracy in 3D DSM, but their performances may be different for different soil properties.
基金supported by the National Natural Science Foundation of China(Grant Nos.62376089,62302153,62302154)the Key Research and Development Program of Hubei Province,China(Grant No.2023BEB024)+1 种基金the Young and Middle-Aged Scientific and Technological Innovation Team Plan in Higher Education Institutions in Hubei Province,China(Grant No.T2023007)the National Natural Science Foundation of China(Grant No.U23A20318).
文摘The Multilayer Perceptron(MLP)is a fundamental neural network model widely applied in various domains,particularly for lightweight image classification,speech recognition,and natural language processing tasks.Despite its widespread success,training MLPs often encounter significant challenges,including susceptibility to local optima,slow convergence rates,and high sensitivity to initial weight configurations.To address these issues,this paper proposes a Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer(LOEV-APO),which enhances both global exploration and local exploitation simultaneously.LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling(LHS)with Opposition-Based Learning(OBL),thus improving the diversity and coverage of the initial population.Moreover,an Elite Protozoa Variation Strategy(EPVS)is incorporated,which applies differential mutation operations to elite candidates,accelerating convergence and strengthening local search capabilities around high-quality solutions.Extensive experiments are conducted on six classification tasks and four function approximation tasks,covering a wide range of problem complexities and demonstrating superior generalization performance.The results demonstrate that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in terms of convergence speed,solution accuracy,and robustness.These findings suggest that LOEV-APO serves as a promising optimization tool for MLP training and provides a viable alternative to traditional gradient-based methods.
文摘This paper introduces the Particle SwarmOptimization(PSO)algorithmto enhance the LatinHypercube Sampling(LHS)process.The key objective is to mitigate the issues of lengthy computation times and low computational accuracy typically encountered when applying Monte Carlo Simulation(MCS)to LHS for probabilistic trend calculations.The PSOmethod optimizes sample distribution,enhances global search capabilities,and significantly boosts computational efficiency.To validate its effectiveness,the proposed method was applied to IEEE34 and IEEE-118 node systems containing wind power.The performance was then compared with Latin Hypercubic Important Sampling(LHIS),which integrates significant sampling with theMonte Carlomethod.The comparison results indicate that the PSO-enhanced method significantly improves the uniformity and representativeness of the sampling.This enhancement leads to a reduction in data errors and an improvement in both computational accuracy and convergence speed.
文摘The ability to predict the anti-interference communications performance of unmanned aerial vehicle(UAV)data links is critical for intelligent route planning of UAVs in real combat scenarios.Previous research in this area has encountered several limitations:Classifiers exhibit low training efficiency,their precision is notably reduced when dealing with imbalanced samples,and they cannot be applied to the condition where the UAV’s flight altitude and the antenna bearing vary.This paper proposes the sequential Latin hypercube sampling(SLHS)-support vector machine(SVM)-AdaBoost algorithm,which enhances the training efficiency of the base classifier and circumvents local optima during the search process through SLHS optimization.Additionally,it mitigates the bottleneck of sample imbalance by adjusting the sample weight distribution using the AdaBoost algorithm.Through comparison,the modeling efficiency,prediction accuracy on the test set,and macro-averaged values of precision,recall,and F1-score for SLHS-SVM-AdaBoost are improved by 22.7%,5.7%,36.0%,25.0%,and 34.2%,respectively,compared with Grid-SVM.Additionally,these values are improved by 22.2%,2.1%,11.3%,2.8%,and 7.4%,respectively,compared with particle swarm optimization(PSO)-SVM-AdaBoost.Combining Latin hypercube sampling with the SLHS-SVM-AdaBoost algorithm,the classification prediction model of anti-interference performance of UAV data links,which took factors like three-dimensional position of UAV and antenna bearing into consideration,is established and used to assess the safety of the classical flying path and optimize the flying route.It was found that the risk of loss of communications could not be completely avoided by adjusting the flying altitude based on the classical path,whereas intelligent path planning based on the classification prediction model of anti-interference performance can realize complete avoidance of being interfered meanwhile reducing the route length by at least 2.3%,thus benefiting both safety and operation efficiency.
基金Supported by Jiangsu Provincical Natural Science Foundation of China(Grant No.BK20140554)National Natural Science Foundation of China(Grant No.51409123)+2 种基金China Postdoctoral Science Foundation(Grant No.2015T80507)Innovation Project for Postgraduates of Jiangsu Province,China(Grant No.KYLX15_1066)the Priority Academic Program Development of Jiangsu Higher Education Institutions,China(PAPD)
文摘In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process for considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to calculate the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation has a good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of inner flow between the original pump and optimized one illustrates the improvement of performance. The optimization process can provide a useful reference on performance improvement of other pumps, even on reduction of pressure fluctuations.
基金supported by the National Natural Science Foundation of China (No. 51779236)the NSFC- Shandong Joint Fund Project (No. U1706226)the National Key Research and Development Program (No. 2016YFC 0303401)
文摘This study investigates strategies for solving the system reliability of large three-dimensional jacket structures.These structural systems normally fail as a result of a series of different components failures.The failure characteristics are investigated under various environmental conditions and direction combinations.Theβ-unzipping technique is adopted to determine critical failure components,and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability.However,this approach needs excessive computational effort for searching failure components and failure paths.Based on a trained artificial neural network(ANN),which can be used to approximate the implicit limit-state function of a complicated structure,a new alternative procedure is proposed to improve the efficiency of the system reliability analysis method.The failure probability is calculated through Monte Carlo simulation(MCS)with Latin hypercube sampling(LHS).The features and applicability of the above procedure are discussed and compared using an example jacket platform located in Chengdao Oilfield,Bohai Sea,China.This study provides a reference for the evaluation of the system reliability of jacket structures.
文摘In this study,the seismic stability of arch dam abutments is investigated within the framework of the probabilistic method.A large concrete arch dam is considered with six wedges for each abutment.The seismic safety of the dam abutments is studied with quasi-static analysis for different hazard levels.The Londe limit equilibrium method is utilized to calculate the stability of the wedges in the abutments.Since the finite element method is time-consuming,the neural network is used as an alternative for calculating the wedge safety factor.For training the neural network,1000 random samples are generated and the dam response is calculated.The direction of applied acceleration is changed within 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor.The Latin hypercube sampling(LHS)is employed for sample generation,and the safety level is determined with reliability analysis.Three sample numbers of 1000,2000 and 4000 are used to examine the average and standard deviation of the results.The global sensitivity analysis is used to identify the effects of random variables on the abutment stability.It is shown that friction,cohesion and uplift pressure have the most significant effects on the wedge stability variance.
基金Youth Science and Technology Fund Project of Gansu Province(No.18JR3RA011)Major Projects in Gansu Province(No.17ZD2GA010)+1 种基金Science and Technology Projects Funding of State Grid Corporation(No.522727160001)Science and Technology Projects of State Grid Gansu Electric Power Company(No.52272716000K)
Abstract: To optimize peak-shaving operation when a high proportion of new energy is integrated into the power grid, evaluation indexes are proposed that simultaneously consider wind-solar complementation and source-load coupling. A typical wind-solar power output scene model based on peaking demand is established, capturing the anti-peaking characteristic of these sources. The model uses balancing scenes and key scenes with a probability distribution, generated by an improved Latin hypercube sampling (LHS) algorithm together with scene reduction technology, to describe the influence of wind-solar output on peaking demand. On this basis, a peak-shaving operation optimization model for power systems with a high proportion of new energy generation is established. The operating indexes after optimization in multi-scene peaking are calculated, and the grid's peaking capability is compared with that obtained when wind-solar complementation and source-load coupling are considered. Finally, a case with a high proportion of new energy verifies the feasibility and validity of the proposed operation strategy.
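A sketch of scene generation and reduction under stated assumptions: LHS draws hourly wind and solar capacity factors from assumed marginal distributions, and k-means clustering stands in for the scene-reduction step, keeping each cluster center as a typical scene weighted by its member count.

```python
import numpy as np
from scipy.stats import qmc, beta, weibull_min
from sklearn.cluster import KMeans

T, N = 24, 500                                    # hours per scene, raw scenes
u = qmc.LatinHypercube(d=2*T, seed=0).random(N)   # one LHS row per scene

# Assumed marginals: Weibull-shaped wind and beta-shaped solar capacity factors.
wind = np.clip(weibull_min.ppf(u[:, :T], c=2.0, scale=0.45), 0.0, 1.0)
solar = beta.ppf(u[:, T:], a=2.0, b=3.5)
solar[:, :6] = 0.0                                # no solar output at night
solar[:, 18:] = 0.0
scenes = np.hstack([wind, solar])

# Scene reduction: cluster raw scenes into 10 typical scenes with probabilities.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(scenes)
probs = np.bincount(km.labels_, minlength=10) / N
for k, p in enumerate(probs):
    print(f"scene {k}: probability {p:.3f}")
# km.cluster_centers_ holds the reduced scenes for the peak-shaving model.
```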
Funding: Supported by the Funds of the Nanjing Institute of Technology (Grants No. JCYJ201619 and ZKJ201804).
Abstract: Nutrient release from sediment is considered a significant source of nutrients for overlying water. Because nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to represent these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, the model was used to simulate nitrogen exchange at the sediment–water interface in eight scenarios. The results showed that sediment acts as a buffer in the sediment–water system: it can store or release nitrogen at any time, regulate the distribution of nitrogen between the sediment and the water column, and supply algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings. However, sediment release may continue to contribute nitrogen to the water column until a new balance is reached, so effective measures for reducing sediment nitrogen should be developed as a supplement. Furthermore, model parameter sensitivity should be examined individually for different research subjects.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52079120).
Abstract: The anti-sliding stability of a gravity dam along its foundation surface is a key problem in gravity dam design. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting the anti-sliding stability of a gravity dam along the foundation surface. The loads and factors affecting dam stability were comprehensively selected according to the design specifications. The sensitivity of the factors was first analyzed using the Sobol method with Latin hypercube sampling, and the results were then verified against those obtained with the Garson method. Finally, the effects of different sampling methods, probability distribution types of the factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province, China, showed that the dominant factors affecting the dam's anti-sliding stability were the anti-shear cohesion, upstream and downstream water levels, anti-shear friction coefficient, uplift pressure reduction coefficient, concrete density, and silt height. The choice of sampling method had no significant effect, but the probability distribution type and the range of factor values greatly affected the analysis results; these two elements should therefore be considered carefully to improve the reliability of the dam anti-sliding stability analysis.
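A hedged sketch of variance-based (Sobol) sensitivity with LHS-generated sample matrices, in the spirit of the analysis above: two independent LHS matrices A and B are combined column-wise (Saltelli's AB_i construction), and Saltelli's 2010 estimator gives first-order indices. The `sliding_fs` safety-factor model and all bounds are toy assumptions.

```python
import numpy as np
from scipy.stats import qmc

names = ["cohesion", "friction", "up_level", "down_level",
         "uplift", "density", "silt"]
lb = np.array([0.8, 0.9, 160.0, 140.0, 0.2, 23.0, 0.0])   # assumed bounds
ub = np.array([1.4, 1.2, 170.0, 145.0, 0.4, 24.5, 10.0])
d, N = len(names), 4096

def sliding_fs(X):                            # toy anti-sliding safety factor
    c, f, hu, hd, a, rho, hs = X.T
    return (900*c + 40*f*rho) / (5*(hu - hd) + 300*a + 4*hs)

A = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(N), lb, ub)
B = qmc.scale(qmc.LatinHypercube(d=d, seed=1).random(N), lb, ub)
yA, yB = sliding_fs(A), sliding_fs(B)
var = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # replace only column i
    Si = np.mean(yB * (sliding_fs(ABi) - yA)) / var   # first-order index
    print(f"S1[{name}] = {Si:.3f}")
```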
Funding: Supported by the National Natural Science Foundation of China (No. 41072171) and the China Geological Survey Project (No. 1212011140027).
Abstract: A surrogate model is introduced for identifying the optimal remediation strategy for aquifers contaminated with dense non-aqueous phase liquids (DNAPLs). A Latin hypercube sampling (LHS) method was used to collect data across the feasible region of the input variables, and a surrogate of the multi-phase flow simulation model was developed using a radial basis function artificial neural network (RBFANN). The developed model was applied to a remediation optimization problem for a perchloroethylene (PCE)-contaminated aquifer. The relative errors of the average PCE removal rates between the surrogate model and the simulation model were below 5% for 10 validation samples, indicating high approximation accuracy. A comparison of the surrogate-based simulation optimization model with a conventional simulation optimization model indicated that the RBFANN surrogate considerably reduces the computational burden of the simulation optimization process.
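A minimal sketch of the surrogate idea, with scipy's `RBFInterpolator` used in place of a full RBF neural network and a hypothetical `removal_rate` function standing in for the multi-phase flow simulator; the well-rate bounds are assumptions.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

# Assumed decision variables: three well pumping rates [m3/d].
lb, ub = np.array([0.0, 0.0, 0.0]), np.array([50.0, 50.0, 50.0])
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(60), lb, ub)

def removal_rate(X):                          # placeholder for the simulator
    return 100*(1 - np.exp(-0.02*X.sum(axis=1))) - 0.002*(X**2).sum(axis=1)

# Radial-basis-function surrogate fitted to the LHS samples.
surrogate = RBFInterpolator(X, removal_rate(X), kernel="thin_plate_spline")

# Check approximation quality on held-out LHS validation samples.
X_val = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(10), lb, ub)
err = np.abs(surrogate(X_val) - removal_rate(X_val))
print("max absolute error on validation samples:", err.max())
```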
Abstract: This study presents a robust design method for autonomous photovoltaic (PV)-wind hybrid power systems that yields an optimum system configuration insensitive to variations in the design variables. The problem is formulated as a constrained multi-objective optimization problem, which is solved by the multi-objective genetic algorithm NSGA-II. The Monte Carlo simulation (MCS) method, combined with Latin hypercube sampling (LHS), is applied to evaluate the stochastic system performance. The potential of the proposed method is demonstrated through a conceptual system design, and a comparative study between the proposed robust method and a deterministic method from the literature is conducted. The results indicate that the proposed method finds a large number of Pareto-optimal system configurations with better compromise performance than the deterministic method, and trade-off information may be derived by systematically comparing these configurations. The proposed robust design method should be useful for hybrid power systems that require both optimality and robustness.
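A sketch of the MCS + LHS performance evaluation that would sit inside such a robust design loop: for one candidate configuration, LHS drives a Monte Carlo estimate of the loss-of-power-supply probability (LPSP) under uncertain wind speed, irradiance, and load. All component models and distributions below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc, weibull_min, beta, norm

def lpsp(n_pv, n_wt, n_samples=20_000):
    """LHS-driven Monte Carlo estimate of loss-of-power-supply probability."""
    u = qmc.LatinHypercube(d=3, seed=0).random(n_samples)
    wind = weibull_min.ppf(u[:, 0], c=2.0, scale=7.0)        # wind speed [m/s]
    irr = beta.ppf(u[:, 1], a=2.5, b=2.0)                    # normalized irradiance
    load = norm.ppf(u[:, 2], loc=40.0, scale=6.0)            # load [kW]
    # Toy power curves: 10 kW turbines (cut-in 3 m/s, rated 12 m/s), 0.3 kW panels.
    p_wt = n_wt * np.clip((wind - 3.0) / (12.0 - 3.0), 0, 1) * 10.0
    p_pv = n_pv * irr * 0.3
    return np.mean(p_wt + p_pv < load)

print("LPSP for 60 panels + 3 turbines:", lpsp(60, 3))
```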
基金financially supported by the National Natural Science Foundation of China(Grant No.51278217)
Abstract: This paper presents an artificial neural network (ANN)-based response surface method that can be used to predict the failure probability of c-φ slopes with spatially variable soil. In this method, the Latin hypercube sampling technique is adopted to generate input datasets for establishing the ANN model; the random finite element method is then used to calculate the corresponding output datasets, accounting for the spatial variability of the soil properties; and finally, an ANN model is trained to construct the response surface of failure probability and obtain an approximate function of the relevant variables. The results of the illustrated example indicate that the proposed method provides credible and accurate estimates of failure probability. Consequently, the obtained approximate function can be used in place of the full analysis process in c-φ slope reliability analyses.
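A compact sketch of the response-surface idea under stated assumptions: LHS samples of the soil statistics are mapped to failure probabilities by an expensive analysis, and an ANN is trained so that p_f can later be read from the network directly. The `rfem_pf` function is a hypothetical stand-in for the random finite element runs, and the parameter bounds are assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

# Assumed inputs: mean c [kPa], mean phi [deg], COV of c, COV of phi.
lb = np.array([10.0, 20.0, 0.10, 0.05])
ub = np.array([40.0, 35.0, 0.40, 0.20])
X = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(300), lb, ub)

def rfem_pf(X):                               # placeholder for random FEM runs
    z = 0.15*X[:, 0] + 0.25*X[:, 1] - 8.0*X[:, 2] - 12.0*X[:, 3]
    return 1.0 / (1.0 + np.exp(z - 7.0))

# Train the ANN response surface of failure probability.
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, rfem_pf(X))
print("p_f at (25 kPa, 28 deg, 0.2, 0.1):",
      ann.predict([[25.0, 28.0, 0.2, 0.1]])[0])
```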
基金support of the Fundamen-tal Research Funds for the Central Universities(No.3072021CF0101)the‘Integration Software of Offshore Float-ing Platform Engineering Design(No.2016YFC0302900)’from the Ministry of Science and Technology of China,and the Project of Development of Floating Offshore Wind Turbine Risk Assessment Software Project,funded by the International S&T Cooperation Program of China(No.2013DFE73060).
Abstract: Because the first step of the fire/gas-detection systems of floating production storage and offloading (FPSO) units is to identify leakage accidents, gas detectors play an important role in controlling leakage risk. To improve the leakage scenario detection rate and reduce the cumulative risk value, this paper presents an optimization method for gas detector placement. The probability density distributions and cumulative probability density distributions of the leakage source variables and environmental variables were calculated based on the Offshore Reliability Data and statistics of the relevant leakage variables, and a set of potential leakage scenarios was constructed using Latin hypercube sampling. Typical FPSO leakage scenarios were analyzed through computational fluid dynamics (CFD), and the impacts of different parameters on the leakage were examined. A series of detectors was deployed according to the simulation results, with the minimization of the product of effective detection time and gas leakage volume as the risk-optimization objective and the locations and number of detectors as decision variables. A greedy extraction heuristic algorithm was used to solve the optimization problem, and the results show that the optimized placement monitors the leakage scenarios more effectively.
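A sketch of a greedy extraction heuristic of this kind, on synthetic data: given a matrix of detection times (scenario by candidate location, as the CFD runs would provide, infinite when a location never sees the leak) and per-scenario leak volumes, detectors are added one at a time, each pick minimizing the summed product of effective detection time and leak volume. All sizes, times, and the undetected-time cap are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scen, n_loc, n_detectors = 200, 50, 6
t_detect = rng.uniform(5.0, 300.0, size=(n_scen, n_loc))      # detection time [s]
t_detect[rng.random((n_scen, n_loc)) < 0.6] = np.inf          # never detected
volume = rng.lognormal(mean=1.0, sigma=0.8, size=n_scen)      # leak volume
t_cap = 600.0                          # time credited when no detector responds

def risk(chosen):
    """Sum over scenarios of (effective detection time) x (leak volume)."""
    t = t_detect[:, chosen].min(axis=1) if chosen else np.full(n_scen, np.inf)
    return np.sum(np.minimum(t, t_cap) * volume)

chosen = []
for _ in range(n_detectors):
    best = min((loc for loc in range(n_loc) if loc not in chosen),
               key=lambda loc: risk(chosen + [loc]))
    chosen.append(best)
    print(f"placed detector at location {best}, risk = {risk(chosen):,.0f}")
```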