An autonomous discrete space is proposed, consisting of a huge number of four-dimensional hypercubic lattices unified along one of the four axes. The unification is such that the properties of the individual lattices are preserved. All the unifying axes are parallel, while the other axes have indeterminate mutual relations. The two kinds of axes are non-interchangeable, resembling the time and space of reality. The unification constitutes a framework without spatial properties. If the axes with indeterminate relations are present at regular intervals in time and space, a Euclidean-like metric and goniometry can be obtained. In the space-like structure thus defined, differences in speed and relativistic relations are only possible within regions of space enclosed by aberrations of the structure.
A multidirectional discrete space consists of numerous hypercubic lattices, each of which contains one of the spatial directions. In such a space, several groups of lattices sharing a certain property can be distinguished. Each group is determined by the number of lattices it comprises; these counts form the characterizing numbers of the space. Using the specific properties of a multidirectional discrete space, it is shown that some of the characterizing numbers can be associated with a physical constant. The fine structure constant appears to be equal to the ratio of two of these numbers, which offers the possibility of calculating the series of smallest numerical values of these numbers. With these values, a reasoned estimate can be made of the upper limit of the smallest distance of the discrete space, of approximately the Planck length.
The possibility of granulated discrete fields is considered, in which there are at least three distinct base granules. Because of the limited size of the granules, the motion of an endlessly extended particle field must be split into an inner and an outer part. The inner part moves gradually in a point-particle-like fashion, while the outer part moves step-wise in a wave-like manner. This dual behaviour is reminiscent of particle-wave duality. Field granulation can be caused by deviations of the lattice structure at the boundaries of a granule, causing some axes of the granule to be tilted. The granules exhibit relativistic effects caused, inter alia, by the universality of the coordination number of the lattice.
For uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods such as random sampling and Latin hypercube sampling require a large number of samples, which entails huge computational costs. How to construct a small sample space has therefore become a question of great interest to researchers. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification. First, the sampling range is formed by carving out the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with a "space filling" criterion. Finally, a sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
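For readers unfamiliar with the baseline scheme, plain Latin hypercube sampling can be sketched in a few lines of Python. This is a minimal illustration of the classical method only; the paper's sequential search, polymorphic-uncertainty carving, and space-filling selection are not reproduced here.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw a Latin hypercube sample on the unit cube [0, 1)^n_dims.

    Along every dimension the interval is split into n_samples equal
    strata, and each of the n_samples points occupies a distinct stratum,
    so one-dimensional coverage is even however few samples are used.
    """
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        # one random point inside each stratum, then shuffle the strata
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [list(row) for row in zip(*columns)]

points = latin_hypercube(10, 3, random.Random(42))
```

Each column is a randomly permuted assignment of strata, which is what distinguishes the design from plain random sampling of the same size.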
With the continuous improvement of the accuracy of geodetic deformation data, the inversion of seismic source parameters places higher demands on nonlinear inversion algorithms. In this research, an improved Sparrow Search Algorithm (SSA) is proposed for the seismic source parameter inversion problem. By replacing the original population generation with Latin hypercube sampling, the improved algorithm reduces the repetition of samples in the population initialization. Subsequently, the algorithm introduces adaptive weights in the discoverer generation phase of the sparrow algorithm and combines them with a Levy flight strategy to make the search more comprehensive and improve the search accuracy throughout the iteration process. As a result, the improved Latin hypercube-based sparrow search algorithm (ILHSSA) has clear advantages in iterative convergence speed and stability. To verify the performance of ILHSSA, the basic genetic algorithm (GA) and sparrow search algorithm (SSA) are compared with ILHSSA on simulated earthquakes of two different types. The simulation experiments show that ILHSSA outperforms SSA in accuracy and stability. Compared with GA, ILHSSA achieves the same inversion accuracy, and it even surpasses GA in inversion speed and in the inversion results of some parameters, demonstrating better stability. Finally, the improved algorithm is applied to the 2017 Bodrum-Kos earthquake and the 2016 Amatrice earthquake in Italy. The inversion results reflect the practicality and reliability of the improved algorithm.
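The initialization step that the abstract describes can be sketched as follows: instead of drawing each individual independently, the population is laid out as a Latin hypercube over the parameter bounds. The bounds used below are illustrative placeholders, not values from the paper, and the adaptive weights and Levy flight steps of ILHSSA are not reproduced.

```python
import random

def lhs_init_population(bounds, pop_size, rng=None):
    """Initialise a metaheuristic population by Latin hypercube sampling.

    bounds: one (low, high) pair per parameter. Each range is split into
    pop_size strata and every individual occupies a distinct stratum in
    every parameter, avoiding the clumping that plain random
    initialisation can produce.
    """
    rng = rng or random.Random()
    per_dim = []
    for low, high in bounds:
        col = [(i + rng.random()) / pop_size for i in range(pop_size)]
        rng.shuffle(col)
        per_dim.append([low + u * (high - low) for u in col])
    return [list(ind) for ind in zip(*per_dim)]

# illustrative fault-parameter bounds (e.g. strike, dip, slip), not from the paper
BOUNDS = [(0.0, 360.0), (0.0, 90.0), (-5.0, 5.0)]
population = lhs_init_population(BOUNDS, 20, random.Random(7))
```

The guarantee of one individual per stratum per parameter is exactly what reduces sample repetition in the initial population.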
Probabilistic assessment of seismic performance (SPPA) is a crucial aspect of evaluating the seismic behavior of structures. For complex bridges with inherent uncertainties, conducting precise and efficient seismic reliability analysis remains a significant challenge. To address this issue, the current study introduces a sample-unequal-weight fractional moment assessment method based on an improved correlation-reduced Latin hypercube sampling (ICLHS) technique. This method integrates the benefits of importance sampling techniques with interpolatory quadrature formulas to enhance the accuracy of estimating the extreme value distribution (EVD) of the seismic response of complex nonlinear structures subjected to non-stationary ground motions. Additionally, the core theoretical approaches employed in seismic reliability analysis (SRA) are elaborated, such as dimension reduction for simulating non-stationary random ground motions and a fractional-maximum entropy single-loop solution strategy. The effectiveness of the proposed method is validated on a three-story nonlinear shear frame structure. Furthermore, a comprehensive reliability analysis of a real-world long-span, single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform, leading to key insights and conclusions.
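The "correlation-reduced" idea behind ICLHS can be illustrated crudely: a plain Latin hypercube design has even marginals but may carry spurious correlation between columns, so one can search a batch of random designs for the one whose columns are closest to uncorrelated. This best-of-many sketch is a stand-in for, not a reproduction of, the authors' ICLHS algorithm.

```python
import random

def max_offdiag_corr(cols):
    """Largest absolute pairwise Pearson correlation between columns."""
    n = len(cols[0])
    def corr(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5
    worst = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            worst = max(worst, abs(corr(cols[i], cols[j])))
    return worst

def correlation_reduced_lhs(n_samples, n_dims, tries=50, rng=None):
    """Among `tries` random Latin hypercube designs, keep the one whose
    columns carry the least spurious correlation."""
    rng = rng or random.Random()
    best, best_score = None, float("inf")
    for _ in range(tries):
        cols = []
        for _ in range(n_dims):
            col = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(col)
            cols.append(col)
        score = max_offdiag_corr(cols)
        if score < best_score:
            best, best_score = cols, score
    return [list(row) for row in zip(*best)], best_score

sample, score = correlation_reduced_lhs(12, 3, tries=30, rng=random.Random(3))
```

Published correlation-reduction schemes permute columns deliberately (in the Iman-Conover spirit) rather than by restart, but the selection criterion is the same.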
Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach of disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach of pure polygon disaggregation (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling plans were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), each with two sample sizes (83,100 and 19,420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare disaggregation accuracy. The accuracy assessment used a set of 500 validation points obtained from the Middlesex County soil survey report. C5.0 (Kappa index = 0.58-0.63) performed better than RF (Kappa index = 0.53-0.54) at the larger sample size, and PPD with C5.0 at the larger sample size was the best-performing approach (Kappa index = 0.63). At the smaller sample size, cLHS (Kappa index = 0.41-0.48) and SRS (Kappa index = 0.40-0.47) produced similar accuracy. PPD exhibited lower processing capacity and time demands (1.62-5.93 h) while yielding maps with lower uncertainty compared to DSMART (2.75-194.2 h). For CSMs predominantly composed of pure polygons, PPD is the more efficient and rational choice for soil series disaggregation. However, DSMART is preferable for disaggregating soil series that lack pure polygon representations in the CSMs.
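The Kappa index reported in the accuracy assessment is Cohen's kappa, which corrects the raw fraction of agreeing validation points for the agreement expected by chance. A minimal sketch on a toy two-class confusion matrix (the matrix values below are made up for illustration):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes).
    1.0 is perfect agreement, 0.0 is chance-level agreement."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    # observed agreement: diagonal mass
    observed = sum(confusion[i][i] for i in range(k)) / total
    # expected agreement: product of marginal class proportions
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# toy example: 45 + 40 agreements out of 100 validation points
kappa = cohens_kappa([[45, 5], [10, 40]])
```

Here the observed agreement is 0.85 against an expected 0.5, giving kappa = 0.7, i.e. comfortably above the 0.63 best case reported in the study.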
Given a graph G and a non-negative integer h, the h-restricted connectivity κh(G) of G is the minimum cardinality of a set of vertices of G that leaves every vertex of G with at least h neighbors outside the set and whose deletion, if possible, disconnects G so that every remaining component has minimum degree at least h; and the h-extra connectivity κ(h)(G) of G is the minimum cardinality of a set of vertices of G whose deletion, if possible, disconnects G so that every remaining component has order greater than h. This paper shows that for the hypercube Qn and the folded hypercube FQn: κ1(Qn)=κ(1)(Qn)=2n-2 for n≥3, κ2(Qn)=3n-5 for n≥4, κ1(FQn)=κ(1)(FQn)=2n for n≥4, and κ(2)(FQn)=4n-4 for n≥8.
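The smallest case of the first result, κ1(Q3) = 2·3 - 2 = 4, is small enough to confirm by brute force: enumerate vertex subsets of the 3-cube in increasing size and look for one whose deletion disconnects the graph while every surviving vertex keeps at least one surviving neighbor.

```python
from itertools import combinations

def hypercube(n):
    """Adjacency of Q_n: vertices are n-bit integers, edges flip one bit."""
    return {v: [v ^ (1 << b) for b in range(n)] for v in range(1 << n)}

def is_valid_1_restricted_cut(adj, removed):
    keep = set(adj) - removed
    if not keep:
        return False
    # every surviving vertex must keep at least one surviving neighbor
    if any(sum(1 for u in adj[v] if u in keep) < 1 for v in keep):
        return False
    # count connected components of the surviving graph
    seen, comps = set(), 0
    for s in keep:
        if s in seen:
            continue
        comps += 1
        stack, _ = [s], seen.add(s)
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u in keep and u not in seen:
                    seen.add(u)
                    stack.append(u)
    return comps >= 2

def restricted_connectivity_1(n):
    adj = hypercube(n)
    for size in range(1, 1 << n):
        for removed in combinations(adj, size):
            if is_valid_1_restricted_cut(adj, set(removed)):
                return size
    return None

kappa1_q3 = restricted_connectivity_1(3)  # the paper's formula gives 2n - 2 = 4
```

A witnessing cut of size 4 is the outside neighborhood of an edge, e.g. removing {010, 011, 100, 101} leaves the two disjoint edges 000-001 and 110-111.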
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated with the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the improvement in performance. The optimization process can serve as a useful reference for performance improvement of other pumps, and even for the reduction of pressure fluctuations.
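The final step, running particle swarm optimization over a cheap surrogate, can be sketched with a plain global-best PSO. The objective below is a simple quadratic bowl standing in for the pump-efficiency surrogate; the coefficients are the commonly used inertia/acceleration defaults, not values from the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, rng=None):
    """Plain global-best particle swarm optimisation.

    f: objective over a list of coordinates; bounds: (low, high) per
    dimension. Returns the best position found and its objective value.
    """
    rng = rng or random.Random()
    w, c1, c2 = 0.72, 1.49, 1.49           # inertia, cognitive, social
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in surrogate: a smooth bowl with its optimum at (1, 2)
best, val = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                         [(-5, 5), (-5, 5)], rng=random.Random(1))
```

In the paper's workflow, the lambda above would be replaced by the surrogate fitted to the CFX efficiency samples, making each objective evaluation essentially free.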
Constructing metamodels with global high fidelity in the design space is significant in engineering design. In this paper, a double-stage metamodel (DSM) is constructed that integrates the advantages of both interpolation and regression metamodels. It takes a regression model as the first stage to fit the overall distribution of the original model, and then an interpolation model of the regression model's approximation error is used as the second stage to improve accuracy. Under the same conditions and with the same samples, DSM shows higher fidelity and better represents the physical characteristics of the original model. To validate the characteristics of DSM, three examples are investigated: the Ackley function, airfoil aerodynamic analysis, and wing aerodynamic analysis. Finally, airfoil and wing aerodynamic design optimizations using a genetic algorithm are presented to verify the engineering applicability of DSM.
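The two-stage idea can be sketched in one dimension: fit a low-order polynomial regression as the trend, then interpolate its residuals so the combined model passes through every sample exactly. The choice of a quadratic trend and a Gaussian RBF for the residual stage is an assumption for illustration, not necessarily the pairing used in the paper.

```python
import math

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit_double_stage(xs, ys, degree=2, eps=1.0):
    """Stage 1: least-squares polynomial trend (normal equations).
    Stage 2: Gaussian RBF interpolant of the stage-1 residuals, so the
    combined metamodel reproduces every training sample exactly."""
    n, cols = len(xs), degree + 1
    A = [[x ** j for j in range(cols)] for x in xs]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(cols)]
           for i in range(cols)]
    Aty = [sum(A[k][i] * ys[k] for k in range(n)) for i in range(cols)]
    coef = solve(AtA, Aty)
    def trend(x):
        return sum(c * x ** j for j, c in enumerate(coef))
    res = [y - trend(x) for x, y in zip(xs, ys)]
    def phi(r):
        return math.exp(-((eps * r) ** 2))
    K = [[phi(xi - xj) for xj in xs] for xi in xs]
    w = solve(K, res)
    return lambda x: trend(x) + sum(wi * phi(x - xi) for wi, xi in zip(w, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(2.0 * x) + 0.3 * x for x in xs]
model = fit_double_stage(xs, ys)
```

Between samples the model follows the regression trend with a smooth interpolated correction, which is exactly the fidelity argument the abstract makes.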
The exchanged hypercube EH(s, t) (where s ≥ 1 and t ≥ 1) is obtained by systematically removing links from a regular hypercube Q(s+t+1). One-step diagnosis of exchanged hypercubes, which involves only one testing phase during which processors test each other, is discussed. The diagnosability of exchanged hypercubes is studied using the pessimistic one-step diagnosis strategy under two diagnosis models: the PMC model and the MM* model. The main results presented here are the two proofs that the degree of diagnosability of EH(s, t) under the pessimistic one-step t1/t1 fault diagnosis strategy is 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the PMC model, and that it is also 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the MM* model.
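The structure being diagnosed can be built explicitly. The labelling below is one common convention (which half of the address pairs with which value of the exchange bit varies between papers): vertices are (s+t+1)-bit strings, bit 0 is always flippable, and the remaining bits are split into a t-block and an s-block gated by bit 0.

```python
def exchanged_hypercube(s, t):
    """Adjacency of EH(s, t) under one common labelling: vertices are
    the (s+t+1)-bit strings u[s+t] ... u[1] u[0]; u and v are adjacent iff
      - they differ only in bit 0, or
      - u[0] = v[0] = 0 and they differ in exactly one of bits 1..t, or
      - u[0] = v[0] = 1 and they differ in exactly one of bits t+1..s+t.
    """
    n = s + t + 1
    adj = {v: set() for v in range(1 << n)}
    for v in range(1 << n):
        adj[v].add(v ^ 1)                      # the exchange edge
        lo, hi = (1, t) if v & 1 == 0 else (t + 1, s + t)
        for b in range(lo, hi + 1):            # the gated block edges
            adj[v].add(v ^ (1 << b))
    # symmetric by construction: flipping bits 1..s+t preserves bit 0
    return adj

def is_connected(adj):
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(adj)

eh = exchanged_hypercube(2, 3)
```

Half of the vertices end up with degree s+1 and half with degree t+1, which is the link removal relative to the uniformly (s+t+1)-regular Q(s+t+1) that the abstract refers to.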
Improving the efficiency of ship optimization is crucial for modern ship design. Compared with traditional methods, multidisciplinary design optimization (MDO) is a more promising approach. For this reason, Collaborative Optimization (CO) is discussed and analyzed in this paper. As one of the most frequently applied MDO methods, CO promotes the autonomy of disciplines while providing a coordinating mechanism that guarantees progress toward an optimum and maintains interdisciplinary compatibility. However, there are difficulties in applying the conventional CO method, such as the difficulty of choosing an initial point and tremendous computational requirements. To overcome these problems, optimal Latin hypercube design and a radial basis function network were applied to CO. Optimal Latin hypercube design is a modified Latin hypercube design. The radial basis function network approximates the optimization model and is updated during the optimization process to improve accuracy. Examples show that the computing efficiency and robustness of this CO method are higher than those of the conventional CO method.
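What makes a Latin hypercube design "optimal" is a space-filling criterion on top of the stratification. A crude but common choice, used here as a stand-in for whatever criterion the paper adopts, is maximin: among many random Latin hypercube designs, keep the one whose closest pair of points is farthest apart.

```python
import random

def random_lhs(n_samples, n_dims, rng):
    cols = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [list(row) for row in zip(*cols)]

def min_pairwise_distance(pts):
    best = float("inf")
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j])) ** 0.5
            best = min(best, d)
    return best

def optimal_lhs(n_samples, n_dims, tries=100, rng=None):
    """Maximin-style 'optimal' Latin hypercube design: draw many random
    LHS designs and keep the one maximising the minimum pairwise
    distance (the space-filling criterion)."""
    rng = rng or random.Random()
    best, best_d = None, -1.0
    for _ in range(tries):
        cand = random_lhs(n_samples, n_dims, rng)
        d = min_pairwise_distance(cand)
        if d > best_d:
            best, best_d = cand, d
    return best, best_d

design, d_best = optimal_lhs(15, 3, tries=40, rng=random.Random(5))
```

Serious implementations improve a single design by column exchanges rather than by restart, but the objective being maximised is the same.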
High-fidelity analysis models, which are beneficial to improving design quality, have been more and more widely utilized in modern engineering design optimization problems. However, high-fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. To improve the efficiency of optimization involving high-fidelity analysis models, surrogates can be applied to approximate the computationally expensive models, which greatly reduces computation time. An efficient heuristic global optimization method using adaptive radial basis functions based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm for maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good space-filling and projective uniformity properties, which greatly benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases, the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify reduced attractive regions in the original design space. Numerical benchmark examples are used to validate the performance of ARFC. The results demonstrate that for most examples the global optima are effectively obtained, and a comparison with the adaptive response surface method (ARSM) shows that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of the optimization problems, and it provides a new optimization strategy for engineering design optimization problems involving computationally expensive models.
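The region-identification step of ARFC relies on fuzzy c-means, which, unlike hard clustering, gives each point a graded membership in every cluster. A minimal two-cluster sketch with a deterministic initialisation (first point plus the point farthest from it, an assumption made here so the example is reproducible):

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fuzzy_c_means(points, m=2.0, iters=60):
    """Plain two-cluster fuzzy c-means with fuzzifier m.

    Alternates membership updates U[k][i] (point k in cluster i) with
    membership-weighted centre updates."""
    c = 2
    far = max(points, key=lambda p: euclid(p, points[0]))
    centers = [list(points[0]), list(far)]
    dim = len(points[0])
    U = []
    for _ in range(iters):
        U = []
        for p in points:
            d = [max(euclid(p, ctr), 1e-12) for ctr in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        for i in range(c):
            w = [U[k][i] ** m for k in range(len(points))]
            tot = sum(w)
            centers[i] = [sum(wk * points[k][dd] for k, wk in enumerate(w)) / tot
                          for dd in range(dim)]
    return centers, U

# two well-separated toy blobs standing in for promising design regions
pts = ([(0.1 * i, -0.1 * i) for i in range(10)]
       + [(10.0 + 0.1 * i, 10.0 - 0.1 * i) for i in range(10)])
centers, memberships = fuzzy_c_means(pts)
```

In ARFC the high-membership neighbourhoods around such centres are the reduced attractive regions in which sampling is then concentrated.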
Let Qn,k (n ≥ 3, 1 ≤ k ≤ n-1) be an n-dimensional enhanced hypercube, an attractive variant of the hypercube obtained by adding some complementary edges, and let fv and fe be the numbers of faulty vertices and faulty edges, respectively. This paper gives three main results. First, a fault-free path P[u, v] of length at least 2^n - 2fv - 1 (respectively, 2^n - 2fv - 2) can be embedded in Qn,k with fv + fe ≤ n-1 when dQn,k(u, v) is odd (respectively, when dQn,k(u, v) is even). Second, Qn,k is (n-2)-edge-fault-free hyper-Hamiltonian laceable when n (≥ 3) and k have the same parity. Last, a fault-free cycle of length at least 2^n - 2fv can be embedded in Qn,k with fe ≤ n-1 and fv + fe ≤ 2n-4.
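Under the standard construction, the enhanced hypercube Qn,k is the n-cube plus one complementary edge per vertex, obtained by flipping bits k through n simultaneously (so k = 1 gives the folded hypercube). A small sketch that builds it and checks its basic structure:

```python
def enhanced_hypercube(n, k):
    """Adjacency of Q_{n,k} (1 <= k <= n-1): the n-cube plus, for every
    vertex, a complementary edge flipping bits k..n at once
    (bits numbered 1..n)."""
    comp_mask = sum(1 << (b - 1) for b in range(k, n + 1))
    adj = {v: set() for v in range(1 << n)}
    for v in range(1 << n):
        for b in range(n):
            adj[v].add(v ^ (1 << b))       # ordinary hypercube edges
        adj[v].add(v ^ comp_mask)          # complementary edge
    return adj

q42 = enhanced_hypercube(4, 2)
```

Because the complementary mask flips at least two bits when k ≤ n-1, the extra edge never coincides with a hypercube edge and the graph is (n+1)-regular, which is what makes the fault-tolerance bounds above slightly stronger than their pure-hypercube counterparts.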
Funding (sequential Latin hypercube sampling for uncertainty quantification): co-supported by the National Natural Science Foundation of China (Nos. 51875014, U2233212 and 51875015), the Natural Science Foundation of Beijing Municipality, China (No. L221008), the Science and Technology Innovation 2025 Major Project of Ningbo, China (No. 2022Z005), and the Tianmushan Laboratory Project, China (No. TK2023-B-001).
Funding (improved sparrow search algorithm for seismic source inversion): funded by the National Natural Science Foundation of China (No. 42174011).
Funding (seismic reliability of complex bridges): supported by the Sichuan Science and Technology Program under Grant No. 2024NSFSC0932 and the National Natural Science Foundation of China under Grant No. 52008047.
Funding (soil map disaggregation): supported by the Ontario Ministry of Agriculture, Food and Rural Affairs, Canada, which provided updated soil information on Ontario and Middlesex County, and by the Natural Sciences and Engineering Research Council of Canada (No. RGPIN-2014-4100).
Funding (restricted and extra connectivity of hypercubes): supported by the National Natural Science Foundation of China under Grant No. 69933020 and the Natural Science Foundation of Shandong Province of China under Grant No. Y2002G03.
Funding (centrifugal pump optimization): supported by the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140554), the National Natural Science Foundation of China (Grant No. 51409123), the China Postdoctoral Science Foundation (Grant No. 2015T80507), the Innovation Project for Postgraduates of Jiangsu Province, China (Grant No. KYLX15_1066), and the Priority Academic Program Development of Jiangsu Higher Education Institutions, China (PAPD).
Funding: Supported by the National Natural Science Foundation of China (61363002)
Abstract: The exchanged hypercube EH(s, t) (where s ≥ 1 and t ≥ 1) is obtained by systematically removing links from the regular hypercube Q_{s+t+1}. One-step diagnosis of exchanged hypercubes, which involves only one testing phase during which processors test each other, is discussed. The diagnosabilities of exchanged hypercubes are studied using the pessimistic one-step diagnosis strategy under two kinds of diagnosis models: the PMC model and the MM* model. The main results presented here are the two proofs that the degree of diagnosability of EH(s, t) under the pessimistic one-step t1/t1 fault diagnosis strategy is 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the PMC model, and that it is also 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the MM* model.
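The stated result collapses to a one-line formula, since "2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s)" is just 2·min(s, t). The helper below is a direct transcription of that claim; the function name is illustrative:

```python
def eh_pessimistic_diagnosability(s: int, t: int) -> int:
    """Degree of pessimistic one-step t1/t1 diagnosability of EH(s, t),
    under both the PMC and the MM* model, per the result stated above."""
    if s < 1 or t < 1:
        raise ValueError("EH(s, t) requires s >= 1 and t >= 1")
    return 2 * min(s, t)

print(eh_pessimistic_diagnosability(3, 5))  # 2 * min(3, 5) = 6
```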
Abstract: Improving the efficiency of ship optimization is crucial for modern ship design. Compared with traditional methods, multidisciplinary design optimization (MDO) is a more promising approach. For this reason, Collaborative Optimization (CO) is discussed and analyzed in this paper. As one of the most frequently applied MDO methods, CO promotes the autonomy of disciplines while providing a coordinating mechanism that guarantees progress toward an optimum and maintains interdisciplinary compatibility. However, the conventional CO method has some practical difficulties, such as choosing an initial point and tremendous computational requirements. To overcome these problems, optimal Latin hypercube design and a radial basis function network were applied to CO. Optimal Latin hypercube design is a modified Latin hypercube design. The radial basis function network approximates the optimization model and is updated during the optimization process to improve accuracy. Examples show that the computational efficiency and robustness of this CO method are higher than those of the conventional CO method.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 50875024, 51105040), the Excellent Young Scholars Research Fund of Beijing Institute of Technology, China (Grant No. 2010Y0102), and the Defense Creative Research Group Foundation of China (Grant No. GFTD0803)
Abstract: High-fidelity analysis models, which are beneficial to improving design quality, are more and more widely used in modern engineering design optimization problems. However, high-fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. The efficiency of optimization involving high-fidelity analysis models can be upgraded by applying surrogates to approximate the computationally expensive models, which greatly reduces the computation time. An efficient heuristic global optimization method using an adaptive radial basis function (RBF) based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm for maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good space-filling and projective uniformity properties, which greatly benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases, the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify reduced attractive regions in the original design space. Numerical benchmark examples are used to validate the performance of ARFC. The results demonstrate that for most application examples the global optima are effectively obtained, and comparison with the adaptive response surface method (ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of optimization problems, and gives a new optimization strategy for engineering design optimization problems involving computationally expensive models.
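The maximin Latin hypercube objective above can be illustrated without the SLE algorithm itself (which is deterministic and more involved): the sketch below generates many random Latin hypercubes and keeps the one whose smallest pairwise distance is largest. It is a simple random-restart stand-in for SLE, with illustrative names and sizes:

```python
import numpy as np
from scipy.spatial.distance import pdist

def random_lhs(n, d, rng):
    """Plain Latin hypercube: each column has exactly one point per 1/n stratum."""
    ranks = np.argsort(rng.random((n, d)), axis=0)   # random permutation per column
    return (ranks + rng.random((n, d))) / n          # jitter inside each stratum

def maximin_lhs(n, d, n_restarts=200, seed=0):
    """Keep the candidate LHS maximizing the minimum pairwise distance
    (a simple surrogate for the SLE construction described in the text)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        cand = random_lhs(n, d, rng)
        score = pdist(cand).min()    # smallest inter-point distance
        if score > best_score:
            best, best_score = cand, score
    return best

X = maximin_lhs(n=10, d=2)
```

The Latin structure guarantees projective uniformity (one point per stratum in every dimension), while the maximin criterion pushes the design toward space-filling.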
Funding: Supported by the NSFC (11071096, 11171129) and the NSF of Hubei Province, China (T201103)
Abstract: Let Q_{n,k} (n ≥ 3, 1 ≤ k ≤ n - 1) be an n-dimensional enhanced hypercube, an attractive variant of the hypercube obtained by adding some complementary edges, and let f_v and f_e be the numbers of faulty vertices and faulty edges, respectively. In this paper, three main results are given. First, a fault-free path P[u, v] of length at least 2^n - 2f_v - 1 (respectively, 2^n - 2f_v - 2) can be embedded in Q_{n,k} with f_v + f_e ≤ n - 1 when d_{Q_{n,k}}(u, v) is odd (respectively, even). Second, Q_{n,k} is (n - 2)-edge-fault-free hyper-Hamiltonian laceable when n (≥ 3) and k have the same parity. Lastly, a fault-free cycle of length at least 2^n - 2f_v can be embedded in Q_{n,k} with f_e ≤ n - 1 and f_v + f_e ≤ 2n - 4.
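Definitions of the enhanced hypercube vary slightly between papers; the sketch below uses one common convention in which Q_{n,k} is Q_n plus, for each vertex, a complementary edge flipping bits k through n (1-indexed). Under this convention every vertex has degree n + 1 when k ≤ n - 1:

```python
def enhanced_hypercube_edges(n, k):
    """Edge set of Q_{n,k}: all Q_n edges plus, for each vertex v, an edge to
    v with bits k..n complemented (one common definition; conventions vary)."""
    assert n >= 3 and 1 <= k <= n - 1
    comp_mask = ((1 << n) - 1) ^ ((1 << (k - 1)) - 1)   # bits k..n set (1-indexed)
    edges = set()
    for v in range(2 ** n):
        for i in range(n):                              # ordinary hypercube edges
            edges.add(frozenset((v, v ^ (1 << i))))
        edges.add(frozenset((v, v ^ comp_mask)))        # complementary edge
    return edges

# Q_{3,1} is the folded 3-cube: 12 hypercube edges + 4 complementary edges,
# every vertex of degree 4.
E = enhanced_hypercube_edges(3, 1)
degree = {v: sum(v in e for e in E) for v in range(8)}
print(len(E), set(degree.values()))  # 16 {4}
```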