For uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods such as random sampling and Latin hypercube sampling require a large number of samples, which entails huge computational costs. Therefore, how to construct a small-size sample space has been a hot issue of interest for researchers. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification. First, the sampling range of the samples is formed by characterizing the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with the "space-filling" criterion. Finally, the sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
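As a point of reference for the sampling and "space-filling" selection steps described above, the following minimal Python sketch generates candidate Latin hypercube designs and keeps the one with the best maximin distance. It is not the authors' sequential search scheme; the design size and number of candidates are arbitrary illustrative choices.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One n-point Latin hypercube design in [0, 1]^d."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # one point per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                  # decouple the columns
    return u

def maximin(x):
    """Minimum pairwise distance; larger means better space filling."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min()

rng = np.random.default_rng(0)
candidates = [latin_hypercube(20, 3, rng) for _ in range(200)]
best = max(candidates, key=maximin)
print("maximin distance of selected design:", round(maximin(best), 4))
```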
Improving the efficiency of ship optimization is crucial for modern ship design. Compared with traditional methods, multidisciplinary design optimization (MDO) is a more promising approach. For this reason, Collaborative Optimization (CO) is discussed and analyzed in this paper. As one of the most frequently applied MDO methods, CO promotes the autonomy of disciplines while providing a coordinating mechanism that guarantees progress toward an optimum and maintains interdisciplinary compatibility. However, there are some difficulties in applying the conventional CO method, such as difficulty in choosing an initial point and tremendous computational requirements. To overcome these problems, optimal Latin hypercube design and a radial basis function network were applied to CO. Optimal Latin hypercube design is a modified Latin hypercube design. The radial basis function network approximates the optimization model and is updated during the optimization process to improve accuracy. Examples show that the computing efficiency and robustness of this CO method are higher than those of the conventional CO method.
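The surrogate-update idea can be illustrated with a short sketch: a radial basis function interpolant is fitted to an initial design, then re-fitted after a new expensive evaluation is added. The `expensive_analysis` function and all numbers below are stand-ins, not the ship disciplines used in the paper; SciPy's RBFInterpolator is used here in place of a trained RBF network.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_analysis(x):
    """Cheap stand-in for an expensive discipline analysis (illustrative only)."""
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.random((30, 2))            # initial design points, e.g. from an optimal LHD
y = expensive_analysis(X)
surrogate = RBFInterpolator(X, y)  # radial basis function approximation

# update step: evaluate the true model at a point found during optimization,
# append it to the data set and re-fit, which improves local accuracy
x_new = np.array([[0.5, 0.5]])
X = np.vstack([X, x_new])
y = np.append(y, expensive_analysis(x_new))
surrogate = RBFInterpolator(X, y)
print(surrogate(np.array([[0.45, 0.55]])))
```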
The design of a new Satellite Launch Vehicle (SLV) is of interest, especially when a combination of solid and liquid propulsion is included. Proposed is a conceptual design and optimization technique for a multistage Low Earth Orbit (LEO)-bound SLV comprising solid and liquid stages, with a Genetic Algorithm (GA) as the global optimizer. Convergence of the GA is improved by introducing an initial population based on the Design of Experiments (DOE) technique. Latin Hypercube Sampling (LHS)-DOE is used for its good space-filling properties. LHS is a stratified random procedure that provides an efficient way of sampling variables from their multivariate distributions. In SLV design, the minimum Gross Lift-off Weight (GLOW) is traditionally sought. Since development costs tend to vary as a function of GLOW, minimum GLOW is treated as a proxy for minimum development cost. The design approach is well suited to initial design sizing because its computational efficiency gives quick insight into vehicle performance prior to detailed design.
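A minimal sketch of seeding a GA initial population from an LHS design is given below; the bounds and variable meanings are hypothetical placeholders, and the GA itself is omitted.

```python
import numpy as np

def lhs_population(n_pop, bounds, rng):
    """Seed a GA population with Latin hypercube samples scaled to the
    design-variable bounds (bounds is a list of (low, high) pairs)."""
    d = len(bounds)
    u = (rng.random((n_pop, d)) + np.arange(n_pop)[:, None]) / n_pop
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# illustrative bounds for a few sizing variables (hypothetical values)
bounds = [(1.0, 3.0),      # e.g. a stage mass-ratio driver
          (0.5, 2.0),      # e.g. stage diameter, m
          (200.0, 400.0)]  # e.g. burn time, s
pop0 = lhs_population(40, bounds, np.random.default_rng(2))
print(pop0.shape)  # (40, 3) -> hand this to the GA as its initial population
```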
The Multilayer Perceptron (MLP) is a fundamental neural network model widely applied in various domains, particularly for lightweight image classification, speech recognition, and natural language processing tasks. Despite its widespread success, training MLPs often encounters significant challenges, including susceptibility to local optima, slow convergence rates, and high sensitivity to initial weight configurations. To address these issues, this paper proposes a Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer (LOEV-APO), which enhances both global exploration and local exploitation simultaneously. LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling (LHS) with Opposition-Based Learning (OBL), thus improving the diversity and coverage of the initial population. Moreover, an Elite Protozoa Variation Strategy (EPVS) is incorporated, which applies differential mutation operations to elite candidates, accelerating convergence and strengthening local search capabilities around high-quality solutions. Extensive experiments are conducted on six classification tasks and four function approximation tasks, covering a wide range of problem complexities and demonstrating superior generalization performance. The results demonstrate that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in terms of convergence speed, solution accuracy, and robustness. These findings suggest that LOEV-APO serves as a promising optimization tool for MLP training and provides a viable alternative to traditional gradient-based methods.
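The hybrid LHS plus OBL initialization can be sketched as follows: the LHS points and their opposition-based counterparts are pooled and the fittest half is kept. The sphere fitness, population size, and bounds are illustrative assumptions, not LOEV-APO's actual configuration.

```python
import numpy as np

def lhs_obl_init(n, d, lo, hi, fitness, rng):
    """Hybrid initialization: LHS points plus their opposites, keep the best n."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    x = lo + u * (hi - lo)
    x_opp = lo + hi - x                 # opposition-based learning counterpart
    pool = np.vstack([x, x_opp])
    f = np.array([fitness(p) for p in pool])
    return pool[np.argsort(f)[:n]]      # best n of the 2n candidates

sphere = lambda p: float(np.sum(p ** 2))   # toy fitness (minimization)
init = lhs_obl_init(20, 5, np.full(5, -10.0), np.full(5, 10.0),
                    sphere, np.random.default_rng(3))
print(init.shape)
```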
Latin hypercube designs (LHDs) are very popular in designing computer experiments. In addition, orthogonality is a desirable property for LHDs, as it allows the estimates of the main effects in linear models to be uncorrelated with each other, and it is a stepping stone to the space-filling property for fitting Gaussian process models. Among the available methods for constructing orthogonal Latin hypercube designs (OLHDs), the rotation method is particularly attractive due to its theoretical elegance as well as its contribution to space-filling properties in low-dimensional projections. This paper proposes a new rotation method for constructing OLHDs and nearly OLHDs with flexible run sizes that cannot be obtained by existing methods. Furthermore, the resulting OLHDs are improved in terms of the maximin distance criterion and the alias matrices, and a new kind of orthogonal design is constructed. Theoretical properties as well as construction algorithms are provided.
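For orientation, the orthogonality criterion itself is easy to check numerically: an OLHD has zero correlation between every pair of columns. The snippet below evaluates that criterion for a plain random LHD (which is generally not orthogonal); it does not implement the rotation construction.

```python
import numpy as np

def max_column_correlation(design):
    """Largest absolute pairwise column correlation; an OLHD attains 0."""
    c = np.corrcoef(design, rowvar=False)
    np.fill_diagonal(c, 0.0)
    return np.abs(c).max()

rng = np.random.default_rng(4)
n, d = 16, 4
random_lhd = np.column_stack([rng.permutation(n) + 1 for _ in range(d)])
print("max |column correlation| of a random LHD:",
      round(max_column_correlation(random_lhd), 3))
```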
Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach of disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach of pure polygon disaggregation (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), with two sample sizes (83,100 and 19,420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare the disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. The MLA C5.0 (Kappa index = 0.58–0.63) showed better performance than RF (Kappa index = 0.53–0.54) based on the larger sample size, and PPD with C5.0 based on the larger sample size was the best-performing approach (Kappa index = 0.63). Based on the smaller sample size, both cLHS (Kappa index = 0.41–0.48) and SRS (Kappa index = 0.40–0.47) produced similar accuracy results. The disaggregation approach PPD exhibited lower processing capacity and time demands (1.62–5.93 h) while yielding maps with lower uncertainty as compared to DSMART (2.75–194.2 h). For CSMs predominantly composed of pure polygons, utilizing PPD for soil series disaggregation is a more efficient and rational choice. However, DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
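The Kappa index used in the accuracy assessment can be computed as shown in this small sketch; the observed and predicted class labels are made-up placeholders, not the Middlesex County validation data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# hypothetical validation set: observed soil series at 10 points vs. predictions
observed  = np.array(["A", "B", "B", "C", "A", "C", "B", "A", "C", "B"])
predicted = np.array(["A", "B", "C", "C", "A", "B", "B", "A", "C", "B"])
print("overall accuracy:", (observed == predicted).mean())
print("Kappa index:", round(cohen_kappa_score(observed, predicted), 2))
```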
Probabilistic assessment of seismic performance (SPPA) is a crucial aspect of evaluating the seismic behavior of structures. For complex bridges with inherent uncertainties, conducting precise and efficient seismic reliability analysis remains a significant challenge. To address this issue, the current study introduces a sample-unequal-weight fractional moment assessment method based on an improved correlation-reduced Latin hypercube sampling (ICLHS) technique. This method integrates the benefits of importance sampling techniques with interpolatory quadrature formulas to enhance the accuracy of estimating the extreme value distribution (EVD) of the seismic response of complex nonlinear structures subjected to non-stationary ground motions. Additionally, the core theoretical approaches employed in seismic reliability analysis (SRA) are elaborated, such as dimension reduction for simulating non-stationary random ground motions and a fractional-maximum entropy single-loop solution strategy. The effectiveness of the proposed method is validated through a three-story nonlinear shear frame structure. Furthermore, a comprehensive reliability analysis of a real-world long-span, single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform, leading to key insights and conclusions.
This paper introduces the Particle Swarm Optimization (PSO) algorithm to enhance the Latin Hypercube Sampling (LHS) process. The key objective is to mitigate the issues of lengthy computation times and low computational accuracy typically encountered when applying Monte Carlo Simulation (MCS) to LHS for probabilistic trend calculations. The PSO method optimizes the sample distribution, enhances global search capabilities, and significantly boosts computational efficiency. To validate its effectiveness, the proposed method was applied to IEEE 34- and IEEE 118-node systems containing wind power. The performance was then compared with Latin Hypercube Importance Sampling (LHIS), which integrates importance sampling with the Monte Carlo method. The comparison results indicate that the PSO-enhanced method significantly improves the uniformity and representativeness of the sampling. This enhancement leads to a reduction in data errors and an improvement in both computational accuracy and convergence speed.
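One way to make the idea concrete is a toy PSO that searches over continuous "keys" decoded (by ranking) into LHS level permutations, maximizing the design's minimum pairwise distance. This is only an illustrative sketch under that random-key-encoding assumption; it is not the sampling formulation or the power-system application from the paper, and the design size, swarm size, and PSO coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 20, 3                     # design size
n_particles, iters = 30, 200

def decode(keys):
    """Random-key decoding: the rank of each key gives the LHS level per column."""
    levels = np.argsort(np.argsort(keys, axis=0), axis=0)
    return (levels + 0.5) / n    # centered LHS points in [0, 1]^d

def neg_maximin(keys):
    x = decode(keys)
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return -dist.min()           # PSO minimizes this

# standard global-best PSO on the continuous keys
pos = rng.random((n_particles, n, d))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([neg_maximin(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([neg_maximin(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("maximin distance of PSO-optimized LHS:", round(-pbest_f.min(), 4))
```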
The ability to predict the anti-interference communications performance of unmanned aerial vehicle (UAV) data links is critical for intelligent route planning of UAVs in real combat scenarios. Previous research in this area has encountered several limitations: classifiers exhibit low training efficiency, their precision is notably reduced when dealing with imbalanced samples, and they cannot be applied when the UAV's flight altitude and the antenna bearing vary. This paper proposes the sequential Latin hypercube sampling (SLHS)-support vector machine (SVM)-AdaBoost algorithm, which enhances the training efficiency of the base classifier and circumvents local optima during the search process through SLHS optimization. Additionally, it mitigates the bottleneck of sample imbalance by adjusting the sample weight distribution using the AdaBoost algorithm. Through comparison, the modeling efficiency, prediction accuracy on the test set, and macro-averaged values of precision, recall, and F1-score for SLHS-SVM-AdaBoost are improved by 22.7%, 5.7%, 36.0%, 25.0%, and 34.2%, respectively, compared with Grid-SVM, and by 22.2%, 2.1%, 11.3%, 2.8%, and 7.4%, respectively, compared with particle swarm optimization (PSO)-SVM-AdaBoost. Combining Latin hypercube sampling with the SLHS-SVM-AdaBoost algorithm, a classification prediction model of the anti-interference performance of UAV data links, which takes factors such as the three-dimensional position of the UAV and the antenna bearing into consideration, is established and used to assess the safety of the classical flight path and to optimize the flight route. It was found that the risk of loss of communications could not be completely avoided by adjusting the flight altitude based on the classical path, whereas intelligent path planning based on the classification prediction model of anti-interference performance can completely avoid interference while reducing the route length by at least 2.3%, benefiting both safety and operational efficiency.
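The weight-adjustment mechanism of AdaBoost around an SVM base classifier can be sketched directly, as below. The imbalanced toy data, kernel settings, and number of boosting rounds are assumptions, and the SLHS hyperparameter search from the paper is not included; a hand-written boosting loop is used to make the sample-weight update explicit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# imbalanced two-class toy data standing in for "interfered / not interfered"
X, y = make_classification(n_samples=600, n_features=6, weights=[0.8, 0.2],
                           random_state=0)
y = np.where(y == 0, -1, 1)                      # AdaBoost uses labels in {-1, +1}
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

w = np.full(len(ytr), 1.0 / len(ytr))            # initial sample weights
learners, alphas = [], []
for _ in range(10):                              # 10 boosting rounds
    clf = SVC(kernel="rbf", C=5.0, gamma="scale")
    clf.fit(Xtr, ytr, sample_weight=w)
    pred = clf.predict(Xtr)
    err = np.clip(w[pred != ytr].sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)        # learner weight
    w *= np.exp(-alpha * ytr * pred)             # boost misclassified samples
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

score = sum(a * c.predict(Xte) for a, c in zip(alphas, learners))
print("test accuracy:", (np.sign(score) == yte).mean())
```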
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the internal flow between the original and optimized pumps illustrates the performance improvement. The optimization process can provide a useful reference for improving the performance of other pumps and even for reducing pressure fluctuations.
Constructing a metamodel with global high fidelity in the design space is significant in engineering design. In this paper, a double-stage metamodel (DSM), which integrates the advantages of both interpolation and regression metamodels, is constructed. It takes a regression model as the first stage to fit the overall distribution of the original model, and then an interpolation model of the regression model's approximation error is used as the second stage to improve accuracy. Under the same conditions and with the same samples, the DSM achieves higher fidelity and better represents the physical characteristics of the original model. In addition, to validate the characteristics of the DSM, three examples, including the Ackley function, airfoil aerodynamic analysis, and wing aerodynamic analysis, are investigated. Finally, airfoil and wing aerodynamic design optimizations using a genetic algorithm are presented to verify the engineering applicability of the DSM.
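A one-dimensional sketch of the two-stage idea: a low-order regression captures the global trend, and an interpolant of the residuals restores local detail. The test function and sample sizes are arbitrary, and SciPy's RBFInterpolator stands in for whatever interpolation model the paper uses in its second stage.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def true_model(x):                        # stands in for the expensive analysis
    return np.sin(6 * x) + 0.3 * x ** 2

rng = np.random.default_rng(7)
x = np.sort(rng.random(15))
y = true_model(x)

# stage 1: quadratic regression captures the overall trend
coef = np.polyfit(x, y, 2)
trend = np.polyval(coef, x)

# stage 2: RBF interpolation of the regression residuals restores local detail
resid_model = RBFInterpolator(x[:, None], y - trend)

def dsm(x_new):
    return np.polyval(coef, x_new) + resid_model(x_new[:, None])

x_test = np.linspace(0, 1, 5)
print(np.abs(dsm(x_test) - true_model(x_test)).max())   # small approximation error
```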
High-fidelity analysis models, which are beneficial to improving design quality, have been more and more widely utilized in modern engineering design optimization problems. However, high-fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. To improve the efficiency of optimization involving high-fidelity analysis models, surrogates can be applied to approximate the computationally expensive models, which greatly reduces the computation time. An efficient heuristic global optimization method using adaptive radial basis functions (RBF) based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm for maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good performance in both space-filling and projective uniformity properties, which greatly benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases, the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify reduced attractive regions in the original design space. Numerical benchmark examples are used to validate the performance of ARFC. The results demonstrate that for most application examples the global optima are effectively obtained, and comparison with the adaptive response surface method (ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of the optimization problems and gives a new optimization strategy for engineering design optimization problems involving computationally expensive models.
This paper presents an actuator used for a trajectory correction fuze, which is subject to high impact loadings during launch. A simulation method is carried out to obtain the peak-peak stress value of each component, from which the ball bearings are identified as possible failure points. Subsequently, three schemes against impact loadings (a full-element deep groove ball bearing with an integrated raceway, a needle roller thrust bearing assembly, and gaskets) are utilized for redesigning the actuator to effectively reduce the bearings' stress. However, multi-objective optimization still needs to be conducted for the gaskets to decrease the stress value further, down to the yield stress. Four gasket structure parameters and the peak-peak stresses of the three bearings serve as the four optimization variables and three objectives, respectively. Optimized Latin hypercube design is used for generating sample points, and a Kriging model, selected according to its estimation results, establishes the relationship between the variables and objectives, standing in for the time-consuming simulation. Accordingly, two optimization algorithms work out the Pareto solutions, from which the best solutions are selected and verified by simulation to determine the optimized structure parameters of the gaskets. It can be concluded that the simulation and optimization method based on these components is effective and efficient.
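The Kriging step can be illustrated with scikit-learn's Gaussian process regressor fitted on an LHS design over two hypothetical gasket parameters. The stress function, parameter ranges, and kernel settings below are invented for illustration and are not the finite-element model from the paper; a plain LHS design stands in for the optimized Latin hypercube design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulated_stress(x):
    """Hypothetical stand-in for the FE simulation: peak-peak stress as a
    function of two gasket parameters scaled to [0, 1]."""
    return 400 - 120 * x[:, 0] + 80 * (x[:, 1] - 0.4) ** 2 + 10 * np.sin(8 * x[:, 0])

rng = np.random.default_rng(8)
n, d = 25, 2
u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # LHS design
for j in range(d):
    u[:, j] = rng.permutation(u[:, j])
y = simulated_stress(u)

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                   normalize_y=True).fit(u, y)
mean, std = kriging.predict(np.array([[0.7, 0.45]]), return_std=True)
print(f"predicted stress {mean[0]:.1f} +/- {std[0]:.1f}")
```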
This study investigates strategies for solving the system reliability of large three-dimensional jacket structures. These structural systems normally fail as a result of a series of different component failures. The failure characteristics are investigated under various environmental conditions and direction combinations. The β-unzipping technique is adopted to determine critical failure components, and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability. However, this approach needs excessive computational effort for searching failure components and failure paths. Based on a trained artificial neural network (ANN), which can be used to approximate the implicit limit-state function of a complicated structure, a new alternative procedure is proposed to improve the efficiency of the system reliability analysis method. The failure probability is calculated through Monte Carlo simulation (MCS) with Latin hypercube sampling (LHS). The features and applicability of the above procedure are discussed and compared using an example jacket platform located in Chengdao Oilfield, Bohai Sea, China. This study provides a reference for the evaluation of the system reliability of jacket structures.
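A compact sketch of failure-probability estimation by LHS-based Monte Carlo simulation is given below, with an explicit resistance-minus-load limit state standing in for the trained ANN surrogate; the distribution parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def limit_state(r, s):
    """g > 0 safe, g <= 0 failure; stands in for the trained ANN surrogate."""
    return r - s

rng = np.random.default_rng(9)
n = 100_000
# LHS in [0, 1]^2, then map to the physical random variables via inverse CDFs
u = (rng.random((n, 2)) + np.arange(n)[:, None]) / n
for j in range(2):
    u[:, j] = rng.permutation(u[:, j])
r = norm.ppf(u[:, 0], loc=5.0, scale=0.8)   # resistance  ~ N(5.0, 0.8)
s = norm.ppf(u[:, 1], loc=3.0, scale=1.0)   # load effect ~ N(3.0, 1.0)

pf = np.mean(limit_state(r, s) <= 0.0)
print("estimated failure probability:", pf)   # exact value is about 0.059
```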
In this study, the seismic stability of arch dam abutments is investigated within the framework of the probabilistic method. A large concrete arch dam is considered, with six wedges for each abutment. The seismic safety of the dam abutments is studied with quasi-static analysis for different hazard levels. The Londe limit equilibrium method is utilized to calculate the stability of the wedges in the abutments. Since the finite element method is time-consuming, a neural network is used as an alternative for calculating the wedge safety factor. For training the neural network, 1000 random samples are generated and the dam response is calculated. The direction of the applied acceleration is changed in 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor. Latin hypercube sampling (LHS) is employed for sample generation, and the safety level is determined with reliability analysis. Three sample sizes of 1000, 2000, and 4000 are used to examine the average and standard deviation of the results. Global sensitivity analysis is used to identify the effects of random variables on abutment stability. It is shown that friction, cohesion, and uplift pressure have the most significant effects on the wedge stability variance.
To optimize peak-shaving operation when a high proportion of new energy is connected to the power grid, evaluation indexes that simultaneously consider wind-solar complementation and source-load coupling are proposed. A typical wind-solar power output scenario model based on peak-shaving demand is established, which has an anti-peaking characteristic. This model uses balancing scenarios and key scenarios with probability distributions, based on an improved Latin hypercube sampling (LHS) algorithm and scenario reduction technology, to illustrate the influence of wind and solar power on peak-shaving demand. On this basis, a peak-shaving operation optimization model for power generation with a high proportion of new energy is established. The various operating indexes after optimization in multi-scenario peak shaving are calculated, and the peak-shaving capability of the power grid is compared with that obtained when considering wind-solar complementation and source-load coupling. Finally, a case with a high proportion of new energy verifies the feasibility and validity of the proposed operation strategy.
Nutrient release from sediment is considered a significant source for overlying water. Given that nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to simulate these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, this model was used to simulate nitrogen exchange at the sediment–water interface in eight scenarios. The results showed that sediment acted as a buffer in the sediment–water system. It could store or release nitrogen at any time, regulate the distribution of nitrogen between sediment and the water column, and provide algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings. However, sediment release might continue to contribute to the water column until a new balance is achieved. Therefore, effective measures for reducing sediment nitrogen should be developed as supplementary measures. Furthermore, model parameter sensitivity should be individually examined for different research subjects.
Coupling Bayes' Theorem with a two-dimensional (2D) groundwater solute advection-diffusion transport equation allows an inverse model to be established to identify a set of contamination source parameters, including source intensity (M), release location (X0, Y0), and release time (T0), based on monitoring well data. To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters, a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy. To demonstrate how the model works, an exemplar problem with an instantaneous release of a contaminant into a confined groundwater aquifer was employed. The information entropy of the posterior distribution of the model parameters was used as a criterion to evaluate the monitoring data quantity index. The optimal monitoring well position and monitoring frequency were solved by the two-step Monte Carlo method and a differential evolution algorithm, given known well monitoring locations and monitoring events. Based on the optimized monitoring well position and sampling frequency, the contamination source was identified by an improved Metropolis algorithm using the Latin hypercube sampling approach. The case study results show that: 1) the optimal monitoring well position (D) is at (445, 200); and 2) the optimal monitoring frequency (Δt) is 7, provided that the number of monitoring events is set at 5. Employing the optimized monitoring well position and frequency, the mean errors of the inverse modeling results for the source parameters (M, X0, Y0, T0) were 9.20%, 0.25%, 0.0061%, and 0.33%, respectively. It was also found that the improved Metropolis-Hastings algorithm (a Markov chain Monte Carlo method) makes the inverse modeling result independent of the initial sampling points and achieves an overall optimization, which significantly improved the accuracy and numerical stability of the inverse modeling results.
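The source-identification step can be illustrated with a plain random-walk Metropolis sampler on a simplified one-dimensional transport kernel. The velocity, dispersion coefficient, well locations, priors, and proposal scales below are all invented for the sketch; the paper's 2D model and LHS-based initialization of the improved Metropolis algorithm are not reproduced.

```python
import numpy as np

# simple 1-D analytic transport kernel standing in for the 2-D advection-diffusion
# model; v, D, t_obs and the well positions are illustrative values only
v, D, t_obs = 1.0, 0.5, 30.0
wells = np.array([20.0, 35.0, 50.0])

def forward(M, x0):
    return M / np.sqrt(4 * np.pi * D * t_obs) * np.exp(
        -(wells - x0 - v * t_obs) ** 2 / (4 * D * t_obs))

rng = np.random.default_rng(10)
true_M, true_x0, sigma = 100.0, 5.0, 0.05
obs = forward(true_M, true_x0) + rng.normal(0, sigma, wells.size)

def log_post(theta):
    M, x0 = theta
    if not (0 < M < 500 and 0 < x0 < 20):          # uniform priors
        return -np.inf
    return -0.5 * np.sum((forward(M, x0) - obs) ** 2) / sigma ** 2

# random-walk Metropolis over (M, X0)
theta, chain = np.array([200.0, 10.0]), []
for _ in range(20_000):
    prop = theta + rng.normal(0, [5.0, 0.3])
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain)[5000:]                      # discard burn-in
print("posterior mean of M, X0:", chain.mean(axis=0))
```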
The anti-sliding stability of a gravity dam along its foundation surface is a key problem in the design of gravity dams. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting gravity dam anti-sliding stability along the foundation surface. According to the design specifications, the loads and factors affecting the stability of a gravity dam were comprehensively selected. Afterwards, the sensitivity of the factors was preliminarily analyzed using the Sobol method with Latin hypercube sampling. Then, the results of the sensitivity analysis were verified against those obtained using the Garson method. Finally, the effects of different sampling methods, probability distribution types of factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province, China, showed that the dominant factors affecting the gravity dam anti-sliding stability were the anti-shear cohesion, upstream and downstream water levels, anti-shear friction coefficient, uplift pressure reduction coefficient, concrete density, and silt height. The choice of sampling method showed no significant effect, but the probability distribution type and the range of factor values greatly affected the analysis results. Therefore, these two elements should be sufficiently considered to improve the reliability of dam anti-sliding stability analysis.
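For reference, first-order Sobol indices can be estimated with the standard pick-freeze (Saltelli-type) estimator, shown below on the Ishigami test function rather than the dam stability model; plain random sampling is used here instead of the LHS-based scheme in the study.

```python
import numpy as np

def g(x):
    """Ishigami test function, a stand-in for the anti-sliding safety factor model."""
    return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(11)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = g(A), g(B)
var = np.var(np.concatenate([yA, yB]))

first_order = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # replace column i with the B sample
    Si = np.mean(yB * (g(ABi) - yA)) / var     # Saltelli-type first-order estimator
    first_order.append(Si)

print("first-order Sobol indices:", np.round(first_order, 3))
```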
Sampling design (SD) plays a crucial role in providing reliable input for digital soil mapping (DSM) and increasing its efficiency. Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a selection procedure for identifying a set of sample locations spread over a geographical space or with good feature space coverage. Good feature space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation. First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in geographical space and illustrate the strengths and weaknesses of these SDs by considering spatial coverage, simplicity, accuracy, and efficiency. Furthermore, Latin hypercube sampling, which obtains a full representation of the multivariate distribution in geographical space, is described in detail in terms of its development, improvement, and application. In addition, we discuss fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in a feature space. We then discuss some practical applications that are mainly addressed by conditioned Latin hypercube sampling, with the flexibility and feasibility of adding multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected by probability sampling is superior because it is free of model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and good feature space coverage; 2) uncovering the real impacts of an SD on the integral DSM procedure; and 3) testing the feasibility and contribution of SDs in three-dimensional (3D) DSM with variability across multiple layers.
基金co-supported by the National Natural Science Foundation of China(Nos.51875014,U2233212 and 51875015)the Natural Science Foundation of Beijing Municipality,China(No.L221008)+1 种基金Science,Technology Innovation 2025 Major Project of Ningbo of China(No.2022Z005)the Tianmushan Laboratory Project,China(No.TK2023-B-001)。
文摘For uncertainty quantification of complex models with high-dimensional,nonlinear,multi-component coupling like digital twins,traditional statistical sampling methods,such as random sampling and Latin hypercube sampling,require a large number of samples,which entails huge computational costs.Therefore,how to construct a small-size sample space has been a hot issue of interest for researchers.To this end,this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification.First,the sampling range of the samples is formed by carving the polymorphic uncertainty based on theoretical analysis.Then,the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with the"space filling"criterion.Finally,the sample selection function is established,and the next most informative sample is optimally selected to obtain the sequential test sample.Compared with the classical sampling method,the generated samples can retain more information on the basis of sparsity.A series of numerical experiments are conducted to demonstrate the superiority of the proposed sequential search-based Latin hypercube sampling scheme,which is a way to provide reliable uncertainty quantification results with small sample sizes.
文摘Improving the efficiency of ship optimization is crucial for modem ship design. Compared with traditional methods, multidisciplinary design optimization (MDO) is a more promising approach. For this reason, Collaborative Optimization (CO) is discussed and analyzed in this paper. As one of the most frequently applied MDO methods, CO promotes autonomy of disciplines while providing a coordinating mechanism guaranteeing progress toward an optimum and maintaining interdisciplinary compatibility. However, there are some difficulties in applying the conventional CO method, such as difficulties in choosing an initial point and tremendous computational requirements. For the purpose of overcoming these problems, optimal Latin hypercube design and Radial basis function network were applied to CO. Optimal Latin hypercube design is a modified Latin Hypercube design. Radial basis function network approximates the optimization model, and is updated during the optimization process to improve accuracy. It is shown by examples that the computing efficiency and robustness of this CO method are higher than with the conventional CO method.
文摘The design of new Satellite Launch Vehicle (SLV) is of interest, especially when a combination of Solid and Liquid Propulsion is included. Proposed is a conceptual design and optimization technique for multistage Low Earth Orbit (LEO) bound SLV comprising of solid and liquid stages with the use of Genetic Algorithm (GA) as global optimizer. Convergence of GA is improved by introducing initial population based on the Design of Experiments (DOE) Technique. Latin Hypercube Sampling (LHS)-DOE is used for its good space filling properties. LHS is a stratified random procedure that provides an efficient way of sampling variables from their multivariate distributions. In SLV design minimum Gross Lift offWeight (GLOW) concept is traditionally being sought. Since the development costs tend to vary as a function of GLOW, this minimum GLOW is considered as a minimum development cost concept. The design approach is meaningful to initial design sizing purpose for its computational efficiency gives a quick insight into the vehicle performance prior to detailed design.
基金supported by the National Natural Science Foundation of China(Grant Nos.62376089,62302153,62302154)the Key Research and Development Program of Hubei Province,China(Grant No.2023BEB024)+1 种基金the Young and Middle-Aged Scientific and Technological Innovation Team Plan in Higher Education Institutions in Hubei Province,China(Grant No.T2023007)the National Natural Science Foundation of China(Grant No.U23A20318).
文摘The Multilayer Perceptron(MLP)is a fundamental neural network model widely applied in various domains,particularly for lightweight image classification,speech recognition,and natural language processing tasks.Despite its widespread success,training MLPs often encounter significant challenges,including susceptibility to local optima,slow convergence rates,and high sensitivity to initial weight configurations.To address these issues,this paper proposes a Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer(LOEV-APO),which enhances both global exploration and local exploitation simultaneously.LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling(LHS)with Opposition-Based Learning(OBL),thus improving the diversity and coverage of the initial population.Moreover,an Elite Protozoa Variation Strategy(EPVS)is incorporated,which applies differential mutation operations to elite candidates,accelerating convergence and strengthening local search capabilities around high-quality solutions.Extensive experiments are conducted on six classification tasks and four function approximation tasks,covering a wide range of problem complexities and demonstrating superior generalization performance.The results demonstrate that LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in terms of convergence speed,solution accuracy,and robustness.These findings suggest that LOEV-APO serves as a promising optimization tool for MLP training and provides a viable alternative to traditional gradient-based methods.
基金supported by National Natural Science Foundation of China(Grant Nos.12131001 and 11871288)National Ten Thousand Talents Program and the 111 Project B20016。
文摘Latin hypercube designs(LHDs)are very popular in designing computer experiments.In addition,orthogonality is a desirable property for LHDs,as it allows the estimates of the main effects in linear models to be uncorrelated with each other,and is a stepping stone to the space-filling property for fitting Gaussian process models.Among the available methods for constructing orthogonal Latin hypercube designs(OLHDs),the rotation method is particularly attractive due to its theoretical elegance as well as its contribution to spacefilling properties in low-dimensional projections.This paper proposes a new rotation method for constructing OLHDs and nearly OLHDs with flexible run sizes that cannot be obtained by existing methods.Furthermore,the resulting OLHDs are improved in terms of the maximin distance criterion and the alias matrices and a new kind of orthogonal designs are constructed.Theoretical properties as well as construction algorithms are provided.
基金the Ontario Ministry of Agriculture,Food and Rural Affairs,Canada,who supported this project by providing updated soil information on Ontario and Middlesex Countysupported by the Natural Science and Engineering Research Council of Canada(No.RGPIN-2014-4100)。
文摘Conventional soil maps(CSMs)often have multiple soil types within a single polygon,which hinders the ability of machine learning to accurately predict soils.Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs.The approach disaggregation and harmonization of soil map units through resampled classification trees(DSMART)is popular but computationally intensive,as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs.Alternatively,the disaggregation approach pure polygon disaggregation(PPD)assigns soil series based solely on the proportions of soil series in pure polygons in CSMs.This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County,Ontario,Canada.Four different sampling methods were used:two sampling designs,simple random sampling(SRS)and conditional Latin hypercube sampling(cLHS),with two sample sizes(83100 and 19420 samples per sampling plan),both based on an area-weighted approach.Two machine learning algorithms(MLAs),C5.0 decision tree(C5.0)and random forest(RF),were applied to the disaggregation approaches to compare the disaggregation accuracy.The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report.The MLA C5.0(Kappa index=0.58–0.63)showed better performance than RF(Kappa index=0.53–0.54)based on the larger sample size,and PPD with C5.0 based on the larger sample size was the best-performing(Kappa index=0.63)approach.Based on the smaller sample size,both cLHS(Kappa index=0.41–0.48)and SRS(Kappa index=0.40–0.47)produced similar accuracy results.The disaggregation approach PPD exhibited lower processing capacity and time demands(1.62–5.93 h)while yielding maps with lower uncertainty as compared to DSMART(2.75–194.2 h).For CSMs predominantly composed of pure polygons,utilizing PPD for soil series disaggregation is a more efficient and rational choice.However,DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
基金Sichuan Science and Technology Program under Grant No.2024NSFSC0932the National Natural Science Foundation of China under Grant No.52008047。
文摘Probabilistic assessment of seismic performance(SPPA)is a crucial aspect of evaluating the seismic behavior of structures.For complex bridges with inherent uncertainties,conducting precise and efficient seismic reliability analysis remains a significant challenge.To address this issue,the current study introduces a sample-unequal weight fractional moment assessment method,which is based on an improved correlation-reduced Latin hypercube sampling(ICLHS)technique.This method integrates the benefits of important sampling techniques with interpolator quadrature formulas to enhance the accuracy of estimating the extreme value distribution(EVD)for the seismic response of complex nonlinear structures subjected to non-stationary ground motions.Additionally,the core theoretical approaches employed in seismic reliability analysis(SRA)are elaborated,such as dimension reduction for simulating non-stationary random ground motions and a fractional-maximum entropy single-loop solution strategy.The effectiveness of this proposed method is validated through a three-story nonlinear shear frame structure.Furthermore,a comprehensive reliability analysis of a real-world long-span,single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform,leading to key insights and conclusions.
文摘This paper introduces the Particle SwarmOptimization(PSO)algorithmto enhance the LatinHypercube Sampling(LHS)process.The key objective is to mitigate the issues of lengthy computation times and low computational accuracy typically encountered when applying Monte Carlo Simulation(MCS)to LHS for probabilistic trend calculations.The PSOmethod optimizes sample distribution,enhances global search capabilities,and significantly boosts computational efficiency.To validate its effectiveness,the proposed method was applied to IEEE34 and IEEE-118 node systems containing wind power.The performance was then compared with Latin Hypercubic Important Sampling(LHIS),which integrates significant sampling with theMonte Carlomethod.The comparison results indicate that the PSO-enhanced method significantly improves the uniformity and representativeness of the sampling.This enhancement leads to a reduction in data errors and an improvement in both computational accuracy and convergence speed.
文摘The ability to predict the anti-interference communications performance of unmanned aerial vehicle(UAV)data links is critical for intelligent route planning of UAVs in real combat scenarios.Previous research in this area has encountered several limitations:Classifiers exhibit low training efficiency,their precision is notably reduced when dealing with imbalanced samples,and they cannot be applied to the condition where the UAV’s flight altitude and the antenna bearing vary.This paper proposes the sequential Latin hypercube sampling(SLHS)-support vector machine(SVM)-AdaBoost algorithm,which enhances the training efficiency of the base classifier and circumvents local optima during the search process through SLHS optimization.Additionally,it mitigates the bottleneck of sample imbalance by adjusting the sample weight distribution using the AdaBoost algorithm.Through comparison,the modeling efficiency,prediction accuracy on the test set,and macro-averaged values of precision,recall,and F1-score for SLHS-SVM-AdaBoost are improved by 22.7%,5.7%,36.0%,25.0%,and 34.2%,respectively,compared with Grid-SVM.Additionally,these values are improved by 22.2%,2.1%,11.3%,2.8%,and 7.4%,respectively,compared with particle swarm optimization(PSO)-SVM-AdaBoost.Combining Latin hypercube sampling with the SLHS-SVM-AdaBoost algorithm,the classification prediction model of anti-interference performance of UAV data links,which took factors like three-dimensional position of UAV and antenna bearing into consideration,is established and used to assess the safety of the classical flying path and optimize the flying route.It was found that the risk of loss of communications could not be completely avoided by adjusting the flying altitude based on the classical path,whereas intelligent path planning based on the classification prediction model of anti-interference performance can realize complete avoidance of being interfered meanwhile reducing the route length by at least 2.3%,thus benefiting both safety and operation efficiency.
基金Supported by Jiangsu Provincical Natural Science Foundation of China(Grant No.BK20140554)National Natural Science Foundation of China(Grant No.51409123)+2 种基金China Postdoctoral Science Foundation(Grant No.2015T80507)Innovation Project for Postgraduates of Jiangsu Province,China(Grant No.KYLX15_1066)the Priority Academic Program Development of Jiangsu Higher Education Institutions,China(PAPD)
文摘In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process for considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to calculate the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation has a good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of inner flow between the original pump and optimized one illustrates the improvement of performance. The optimization process can provide a useful reference on performance improvement of other pumps, even on reduction of pressure fluctuations.
文摘Constructing metamodel with global high-fidelity in design space is significant in engineering design. In this paper, a double-stage metamodel (DSM) which integrates advantages of both interpolation metamodel and regression metamodel is constructed. It takes regression model as the first stage to fit overall distribution of the original model, and then interpolation model of regression model approximation error is used as the second stage to improve accuracy. Under the same conditions and with the same samples, DSM expresses higher fidelity and represents physical characteristics of original model better. Besides, in order to validate DSM characteristics, three examples including Ackley function, airfoil aerodynamic analysis and wing aerodynamic analysis are investigated, In the end, airfoil and wing aerodynamic design optimizations using genetic algorithm are presented to verify the engineering applicability of DSM.
基金supported by National Natural Science Foundation of China (Grant Nos. 50875024,51105040)Excellent Young Scholars Research Fund of Beijing Institute of Technology,China (Grant No.2010Y0102)Defense Creative Research Group Foundation of China(Grant No. GFTD0803)
文摘High fidelity analysis models,which are beneficial to improving the design quality,have been more and more widely utilized in the modern engineering design optimization problems.However,the high fidelity analysis models are so computationally expensive that the time required in design optimization is usually unacceptable.In order to improve the efficiency of optimization involving high fidelity analysis models,the optimization efficiency can be upgraded through applying surrogates to approximate the computationally expensive models,which can greately reduce the computation time.An efficient heuristic global optimization method using adaptive radial basis function(RBF) based on fuzzy clustering(ARFC) is proposed.In this method,a novel algorithm of maximin Latin hypercube design using successive local enumeration(SLE) is employed to obtain sample points with good performance in both space-filling and projective uniformity properties,which does a great deal of good to metamodels accuracy.RBF method is adopted for constructing the metamodels,and with the increasing the number of sample points the approximation accuracy of RBF is gradually enhanced.The fuzzy c-means clustering method is applied to identify the reduced attractive regions in the original design space.The numerical benchmark examples are used for validating the performance of ARFC.The results demonstrates that for most application examples the global optima are effectively obtained and comparison with adaptive response surface method(ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum.This method improves the efficiency and global convergence of the optimization problems,and gives a new optimization strategy for engineering design optimization problems involving computationally expensive models.
基金The authors would like to acknowledge National Defense Pre-Research Foundation of China(Grant No.41419030102)to provide fund for conducting experiments.
文摘This paper presents an actuator used for the trajectory correction fuze,which is subject to high impact loadings during launch.A simulation method is carried out to obtain the peak-peak stress value of each component,from which the ball bearings are possible failures according to the results.Subsequently,three schemes against impact loadings,full-element deep groove ball bearing and integrated raceway,needle roller thrust bearing assembly,and gaskets are utilized for redesigning the actuator to effectively reduce the bearings’stress.However,multi-objectives optimization still needs to be conducted for the gaskets to decrease the stress value further to the yield stress.Four gasket’s structure parameters and three bearings’peak-peak stress are served as the four optimization variables and three objectives,respectively.Optimized Latin hypercube design is used for generating sample points,and Kriging model selected according to estimation result can establish the relationship between the variables and objectives,representing the simulation which is time-consuming.Accordingly,two optimization algorithms work out the Pareto solutions,from which the best solutions are selected,and verified by the simulation to determine the gaskets optimized structure parameters.It can be concluded that the simulation and optimization method based on these components is effective and efficient.
基金supported by the National Natural Science Foundation of China (No. 51779236)the NSFC- Shandong Joint Fund Project (No. U1706226)the National Key Research and Development Program (No. 2016YFC 0303401)
文摘This study investigates strategies for solving the system reliability of large three-dimensional jacket structures.These structural systems normally fail as a result of a series of different components failures.The failure characteristics are investigated under various environmental conditions and direction combinations.Theβ-unzipping technique is adopted to determine critical failure components,and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability.However,this approach needs excessive computational effort for searching failure components and failure paths.Based on a trained artificial neural network(ANN),which can be used to approximate the implicit limit-state function of a complicated structure,a new alternative procedure is proposed to improve the efficiency of the system reliability analysis method.The failure probability is calculated through Monte Carlo simulation(MCS)with Latin hypercube sampling(LHS).The features and applicability of the above procedure are discussed and compared using an example jacket platform located in Chengdao Oilfield,Bohai Sea,China.This study provides a reference for the evaluation of the system reliability of jacket structures.
文摘In this study,the seismic stability of arch dam abutments is investigated within the framework of the probabilistic method.A large concrete arch dam is considered with six wedges for each abutment.The seismic safety of the dam abutments is studied with quasi-static analysis for different hazard levels.The Londe limit equilibrium method is utilized to calculate the stability of the wedges in the abutments.Since the finite element method is time-consuming,the neural network is used as an alternative for calculating the wedge safety factor.For training the neural network,1000 random samples are generated and the dam response is calculated.The direction of applied acceleration is changed within 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor.The Latin hypercube sampling(LHS)is employed for sample generation,and the safety level is determined with reliability analysis.Three sample numbers of 1000,2000 and 4000 are used to examine the average and standard deviation of the results.The global sensitivity analysis is used to identify the effects of random variables on the abutment stability.It is shown that friction,cohesion and uplift pressure have the most significant effects on the wedge stability variance.
基金Youth Science and Technology Fund Project of Gansu Province(No.18JR3RA011)Major Projects in Gansu Province(No.17ZD2GA010)+1 种基金Science and Technology Projects Funding of State Grid Corporation(No.522727160001)Science and Technology Projects of State Grid Gansu Electric Power Company(No.52272716000K)
文摘To optimize peaking operation when high proportion new energy accesses to power grid,evaluation indexes are proposed which simultaneously consider wind-solar complementation and source-load coupling.A typical wind-solar power output scene model based on peaking demand is established which has anti-peaking characteristic.This model uses balancing scenes and key scenes with probability distribution based on improved Latin hypercube sampling(LHS)algorithm and scene reduction technology to illustrate the influence of wind-solar on peaking demand.Based on this,a peak shaving operation optimization model of high proportion new energy power generation is established.The various operating indexes after optimization in multi-scene peaking are calculated,and the ability of power grid peaking operation is compared whth that considering wind-solar complementation and source-load coupling.Finally,a case of high proportion new energy verifies the feasibility and validity of the proposed operation strategy.
Funding: Supported by the Funds of the Nanjing Institute of Technology (Grant Nos. JCYJ201619 and ZKJ201804).
Abstract: Nutrient release from sediment is considered a significant nutrient source for overlying water. Because nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to represent these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, the model was used to simulate nitrogen exchange at the sediment–water interface under eight scenarios. The results showed that sediment acts as a buffer in the sediment–water system: it can store or release nitrogen at any time, regulate the distribution of nitrogen between the sediment and the water column, and supply algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings; however, sediment release may continue to contribute nitrogen to the water column until a new balance is reached. Therefore, effective measures for reducing sediment nitrogen should be developed as supplementary measures. Furthermore, model parameter sensitivity should be examined individually for different research subjects.
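The buffering behaviour described above can be pictured with a deliberately small two-box mass balance: nitrogen crosses the sediment-water interface in proportion to the concentration difference, so the sediment stores nitrogen when the water column is enriched and releases it when external loads drop. All rate constants, loads and initial values below are illustrative assumptions, not the calibrated flux model of the study.

```python
# Two-box sediment-water nitrogen exchange sketch (explicit Euler stepping).
dt, days = 0.1, 365.0
k_exchange = 0.05     # 1/day, interface exchange coefficient (assumed)
k_loss = 0.01         # 1/day, water-column losses: uptake, outflow (assumed)
k_burial = 0.005      # 1/day, permanent burial in sediment (assumed)
external_load = 0.05  # mg/L/day, external nitrogen loading (assumed)

c_water, c_sed = 1.0, 3.0                          # mg/L-equivalent pools
for _ in range(int(days / dt)):
    flux = k_exchange * (c_sed - c_water)          # > 0: sediment releases N
    c_water += dt * (external_load + flux - k_loss * c_water)
    c_sed += dt * (-flux - k_burial * c_sed)
print(f"water N ~ {c_water:.2f}, sediment N ~ {c_sed:.2f} after {days:.0f} days")
```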
Funding: This work was supported by the Major Science and Technology Program for Water Pollution Control and Treatment (No. 2015ZX07406005), the National Natural Science Foundation of China (Nos. 41430643 and 51774270), and the National Key Research & Development Plan (No. 2016YFC0501109).
Abstract: Coupling Bayes' theorem with a two-dimensional (2D) groundwater solute advection-diffusion transport equation yields an inverse model for identifying a set of contamination source parameters, including the source intensity (M), release location (X0, Y0), and release time (T0), from monitoring well data. To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters, a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy. To demonstrate how the model works, an example problem with an instantaneous release of a contaminant into a confined groundwater aquifer was employed. The information entropy of the posterior distribution of the model parameters was used as the criterion for evaluating the quantity of information in the monitoring data. The optimal monitoring well position and monitoring frequency were solved with a two-step Monte Carlo method and a differential evolution algorithm for a given number of monitoring wells and monitoring events. Based on the optimized monitoring well position and sampling frequency, the contamination source was identified by an improved Metropolis algorithm using a Latin hypercube sampling approach. The case study results show that: 1) the optimal monitoring well position (D) is at (445, 200); and 2) the optimal monitoring frequency (Δt) is 7, provided that the number of monitoring events is set to 5. Using the optimized monitoring well position and frequency, the mean errors of the inverse modeling results for the source parameters (M, X0, Y0, T0) were 9.20%, 0.25%, 0.0061%, and 0.33%, respectively, so the optimized design supports accurate source identification. It was also found that the improved Metropolis-Hastings algorithm (a Markov chain Monte Carlo method) makes the inverse modeling result independent of the initial sampling points and achieves an overall optimization, which significantly improves the accuracy and numerical stability of the inverse modeling results.
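A compact sketch of the identification step: random-walk Metropolis sampling of (M, X0, Y0, T0), with chain starting points drawn by LHS over the prior bounds. The analytical 2D instantaneous-source forward model, the velocity and dispersion coefficients, the well positions, the "observations" and the noise level are illustrative assumptions, not the paper's configuration or its improved algorithm.

```python
# Random-walk Metropolis identification of (M, X0, Y0, T0) with LHS starts.
import numpy as np
from scipy.stats import qmc

v, Dx, Dy = 2.0, 10.0, 2.0                      # assumed velocity and dispersion coefficients
wells = np.array([[445.0, 200.0], [300.0, 180.0]])
obs_times = np.array([120.0, 160.0, 200.0])

def forward(theta, xy, t):
    # Analytical 2D instantaneous-source advection-diffusion solution.
    M, X0, Y0, T0 = theta
    tau = max(t - T0, 1e-6)
    return (M / (4.0 * np.pi * tau * np.sqrt(Dx * Dy))
            * np.exp(-((xy[0] - X0 - v * tau) ** 2 / (4.0 * Dx * tau)
                       + (xy[1] - Y0) ** 2 / (4.0 * Dy * tau))))

def log_post(theta, data, sigma, lo, hi):
    if np.any(theta < lo) or np.any(theta > hi):          # uniform prior bounds
        return -np.inf
    pred = np.array([forward(theta, w, t) for w in wells for t in obs_times])
    return -0.5 * np.sum((pred - data) ** 2) / sigma ** 2

lo = np.array([500.0, 50.0, 100.0, 0.0])      # bounds on (M, X0, Y0, T0)
hi = np.array([2000.0, 300.0, 300.0, 30.0])
true = np.array([1000.0, 150.0, 200.0, 10.0])
data = np.array([forward(true, w, t) for w in wells for t in obs_times])
sigma = 0.02                                  # assumed observation error

rng = np.random.default_rng(0)
starts = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(4), lo, hi)
theta = starts[0]                             # one chain shown; run one per start in practice
step = 0.02 * (hi - lo)
chain, lp = [], log_post(theta, data, sigma, lo, hi)
for _ in range(20000):
    prop = theta + step * rng.standard_normal(4)
    lp_prop = log_post(prop, data, sigma, lo, hi)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean (after burn-in):", np.mean(chain[5000:], axis=0))
```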
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52079120).
Abstract: The anti-sliding stability of a gravity dam along its foundation surface is a key problem in gravity dam design. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting the anti-sliding stability of a gravity dam along the foundation surface. According to the design specifications, the loads and factors affecting the stability of a gravity dam were comprehensively selected. The sensitivity of the factors was first analyzed using the Sobol method with Latin hypercube sampling, and the results were then verified against those obtained with the Garson method. Finally, the effects of different sampling methods, probability distribution types of the factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province, China, showed that the dominant factors affecting the dam's anti-sliding stability were the anti-shear cohesion, the upstream and downstream water levels, the anti-shear friction coefficient, the uplift pressure reduction coefficient, the concrete density, and the silt height. The choice of sampling method had no significant effect, but the probability distribution type and the range of factor values greatly affected the analysis results. Therefore, these two elements should be carefully considered to improve the reliability of the dam anti-sliding stability analysis.
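A minimal sketch of the screening step: first-order and total Sobol indices estimated from two LHS base matrices with a pick-freeze (Saltelli-type) scheme. The toy shear-friction safety-factor formula, the four variables and their ranges are illustrative assumptions, not the load set or specification formula used for the Yunnan dam.

```python
# Sobol sensitivity indices from LHS base matrices (pick-freeze estimators).
import numpy as np
from scipy.stats import qmc

names = ["friction_coeff", "cohesion_MPa", "uplift_reduction", "water_level_m"]
lo = np.array([0.8, 0.5, 0.2, 60.0])
hi = np.array([1.2, 1.5, 0.5, 80.0])
d = len(names)

def safety_factor(x):
    # Toy shear-friction formula per unit dam width (all constants assumed).
    f, c, alpha, h = x.T
    thrust = 0.5 * 9.81 * h ** 2                 # horizontal water thrust, kN/m
    uplift = alpha * 0.5 * 9.81 * h * 70.0       # uplift resultant, 70 m base width
    weight = 1.0e5                               # self-weight, kN/m
    return (f * (weight - uplift) + c * 1.0e3 * 70.0) / thrust

N = 4096
base = qmc.LatinHypercube(d=2 * d, seed=3).random(N)
A = qmc.scale(base[:, :d], lo, hi)
B = qmc.scale(base[:, d:], lo, hi)
fA, fB = safety_factor(A), safety_factor(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # "freeze" all columns except i
    fABi = safety_factor(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var         # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-order index (Jansen 1999)
    print(f"{name:16s}  S1 = {S1:5.2f}   ST = {ST:5.2f}")
```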
Funding: Funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada (No. RGPIN-2014-04100).
Abstract: Sampling design (SD) plays a crucial role in providing reliable input for digital soil mapping (DSM) and in increasing its efficiency. Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a selection procedure for identifying a set of sample locations spread over a geographical space or with good feature-space coverage. Good feature-space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation. First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in geographical space and illustrate their strengths and weaknesses in terms of spatial coverage, simplicity, accuracy, and efficiency. Latin hypercube sampling, which obtains a full representation of the multivariate distribution in feature space, is then described in detail with respect to its development, improvement, and application. In addition, we discuss fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in feature space. We then discuss practical applications that are mainly addressed by conditioned Latin hypercube sampling, owing to its flexibility and the feasibility of adding multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected by probability sampling is superior because it is free of model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and good feature-space coverage; 2) uncovering the real impact of an SD on the whole DSM procedure; and 3) testing the feasibility and contribution of SDs in three-dimensional (3D) DSM with variability across multiple layers.
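Since conditioned Latin hypercube sampling (cLHS) figures prominently above, a minimal sketch of its core idea follows: select n locations from a finite candidate set so that the chosen covariate values fall one per quantile stratum of each covariate, optimized here by simple simulated annealing. The synthetic covariates, the objective, and the cooling schedule are illustrative assumptions and not a substitute for dedicated cLHS implementations.

```python
# Conditioned LHS sketch: pick sample locations whose covariate values cover
# one observation per quantile stratum of every covariate.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_samples, n_cov = 5000, 20, 3
X = rng.random((n_candidates, n_cov))           # stand-in covariate raster values

# Quantile strata edges: n_samples strata per covariate.
edges = np.quantile(X, np.linspace(0.0, 1.0, n_samples + 1), axis=0)

def objective(idx):
    cost = 0.0
    for j in range(n_cov):
        counts, _ = np.histogram(X[idx, j], bins=edges[:, j])
        cost += np.abs(counts - 1).sum()        # ideal: one sample per stratum
    return cost

idx = rng.choice(n_candidates, n_samples, replace=False)
cost, temp = objective(idx), 1.0
for _ in range(20000):
    cand = rng.integers(n_candidates)
    if cand in idx:
        continue
    trial = idx.copy()
    trial[rng.integers(n_samples)] = cand       # swap one selected location
    new_cost = objective(trial)
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
        idx, cost = trial, new_cost
    temp *= 0.9995                              # geometric cooling
print("final cLHS objective:", cost)
```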