An autonomous discrete space is proposed consisting of a huge number of four-dimensional hypercubic lattices, unified along one of the four axes. The unification is such that the properties of the individual lattices are preserved. All the unifying axes are parallel, while the other axes have indeterminate mutual relations. The two kinds of axes are non-interchangeable, resembling the time and space of reality. The unification constitutes a framework without spatial properties. If the axes with indeterminate relations are present at regular intervals in time and space, a Euclidean-like metric and goniometry can be obtained. In the space-like structure thus defined, differences in speed and relativistic relations are possible only within regions of space enclosed by aberrations of the structure.
A multidirectional discrete space consists of numerous hypercubic lattices, each of which contains one of the spatial directions. In such a space, several groups of lattices sharing a certain property can be distinguished. Each group is determined by the number of lattices it comprises, and these counts form the characterizing numbers of the space. Using the specific properties of a multidirectional discrete space, it is shown that some of the characterizing numbers can be associated with a physical constant. The fine-structure constant appears to be equal to the ratio of two of these numbers, which offers the possibility of calculating the series of smallest numerical values of these numbers. With these values, a reasoned estimate can be made of the upper limit of the smallest distance of the discrete space, which is approximately the Planck length.
The possibility of granulated discrete fields is considered, in which there are at least three distinct base granules. Because of the limited size of the granules, the motion of an endlessly extended particle field must be split into an inner and an outer part. The inner part moves gradually, in a point-particle-like fashion; the outer part moves step-wise, in a wave-like manner. This dual behaviour is reminiscent of particle-wave duality. Field granulation can be caused by deviations of the lattice structure at the boundaries of the granule, causing some axes of the granule to be tilted. The granules exhibit relativistic effects caused, inter alia, by the universality of the coordination number of the lattice.
Seismic performance probabilistic assessment (SPPA) is a crucial aspect of evaluating the seismic behavior of structures. For complex bridges with inherent uncertainties, conducting precise and efficient seismic reliability analysis remains a significant challenge. To address this issue, the current study introduces a sample-unequal-weight fractional moment assessment method based on an improved correlation-reduced Latin hypercube sampling (ICLHS) technique. This method integrates the benefits of importance sampling techniques with interpolatory quadrature formulas to enhance the accuracy of estimating the extreme value distribution (EVD) of the seismic response of complex nonlinear structures subjected to non-stationary ground motions. Additionally, the core theoretical approaches employed in seismic reliability analysis (SRA) are elaborated, such as dimension reduction for simulating non-stationary random ground motions and a fractional-moment maximum-entropy single-loop solution strategy. The effectiveness of the proposed method is validated on a three-story nonlinear shear frame structure. Furthermore, a comprehensive reliability analysis of a real-world long-span, single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform, leading to key insights and conclusions.
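As a minimal illustration of the fractional-moment ingredient above, the sketch below estimates weighted fractional moments M_alpha = sum_i w_i * x_i^alpha from a sample of extreme responses. The lognormal stand-in data, the equal weights, and all names are hypothetical; in the method described, the unequal weights would come from the importance-sampling step.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-in for sampled extreme seismic responses (lognormal EVD)
x = rng.lognormal(mean=0.0, sigma=0.4, size=2000)
w = np.full_like(x, 1.0 / x.size)  # equal weights here; IS gives unequal w_i

def frac_moment(x, w, alpha):
    """Weighted fractional moment M_alpha = sum_i w_i * x_i**alpha."""
    return np.sum(w * x**alpha)

for a in (0.25, 0.5, 1.0, 1.5):
    print(f"M_{a} = {frac_moment(x, w, a):.4f}")
```

A maximum-entropy step would then choose a density whose fractional moments match these estimates.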
Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The disaggregation and harmonization of soil map units through resampled classification trees (DSMART) approach is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the pure polygon disaggregation (PPD) approach assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditioned Latin hypercube sampling (cLHS), each with two sample sizes (83,100 and 19,420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. C5.0 (Kappa index = 0.58-0.63) performed better than RF (Kappa index = 0.53-0.54) with the larger sample size, and PPD with C5.0 at the larger sample size was the best-performing approach (Kappa index = 0.63). At the smaller sample size, cLHS (Kappa index = 0.41-0.48) and SRS (Kappa index = 0.40-0.47) produced similar accuracy. PPD required less processing capacity and time (1.62-5.93 h) while yielding maps with lower uncertainty compared with DSMART (2.75-194.2 h). For CSMs predominantly composed of pure polygons, PPD is the more efficient and rational choice for soil series disaggregation. However, DSMART is preferable for disaggregating soil series that lack pure-polygon representation in the CSMs.
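Since the accuracies above are reported as Kappa indices, here is a minimal sketch of computing Cohen's kappa with scikit-learn; the observed/predicted soil-series labels are invented stand-ins, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# hypothetical predicted vs. observed soil series at validation points
observed  = ["A", "B", "A", "C", "B", "A", "C", "C"]
predicted = ["A", "B", "B", "C", "B", "A", "C", "A"]

# kappa corrects raw agreement for the agreement expected by chance
print(f"Kappa = {cohen_kappa_score(observed, predicted):.2f}")
```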
For uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods such as random sampling and Latin hypercube sampling require a large number of samples, which entails huge computational costs. How to construct a small sample space has therefore become a question of active research interest. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme that generates efficient and accurate samples for uncertainty quantification. First, the sampling range is formed by carving out the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with a space-filling criterion. Finally, a sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
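A minimal sketch of the "Latin hypercube design plus space-filling criterion" step, assuming SciPy's qmc module and using the maximin (largest smallest pairwise distance) criterion as a stand-in for whatever criterion the paper actually employs; the dimension, sample count, and candidate count are arbitrary.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import pdist

def maximin(design):
    """Space-filling score: smallest pairwise distance (larger is better)."""
    return pdist(design).min()

rng = np.random.default_rng(42)
best, best_score = None, -np.inf
# keep the best of several random LHS designs under the maximin criterion
for _ in range(50):
    design = qmc.LatinHypercube(d=3, seed=rng).random(n=20)
    score = maximin(design)
    if score > best_score:
        best, best_score = design, score

print(f"best maximin distance over 50 designs: {best_score:.3f}")
```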
This paper introduces the Particle Swarm Optimization (PSO) algorithm to enhance the Latin Hypercube Sampling (LHS) process. The key objective is to mitigate the lengthy computation times and low computational accuracy typically encountered when applying Monte Carlo Simulation (MCS) with LHS to probabilistic trend calculations. The PSO method optimizes the sample distribution, enhances global search capability, and significantly boosts computational efficiency. To validate its effectiveness, the proposed method was applied to IEEE 34- and IEEE 118-node systems containing wind power. Its performance was then compared with Latin Hypercube Importance Sampling (LHIS), which integrates importance sampling with the Monte Carlo method. The comparison results indicate that the PSO-enhanced method significantly improves the uniformity and representativeness of the sampling. This enhancement reduces data errors and improves both computational accuracy and convergence speed.
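To see why LHS-based estimators outperform plain MCS on smooth responses, this hedged sketch compares the spread of the two estimators over repeated runs on a toy integrand; the function and sample sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

def g(u):
    """Toy smooth response on the unit square, standing in for a power-flow run."""
    return np.exp(u[:, 0]) * np.sin(np.pi * u[:, 1])

n, reps = 64, 200
rng = np.random.default_rng(1)

# repeat both estimators and compare their run-to-run spread
mc  = [g(rng.random((n, 2))).mean() for _ in range(reps)]
lhs = [g(qmc.LatinHypercube(d=2, seed=rng).random(n)).mean() for _ in range(reps)]

print(f"MC  estimator std: {np.std(mc):.5f}")
print(f"LHS estimator std: {np.std(lhs):.5f}")  # typically noticeably smaller
```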
The ability to predict the anti-interference communications performance of unmanned aerial vehicle (UAV) data links is critical for intelligent route planning of UAVs in real combat scenarios. Previous research in this area has encountered several limitations: classifiers exhibit low training efficiency, their precision drops notably when dealing with imbalanced samples, and they cannot be applied when the UAV's flight altitude and antenna bearing vary. This paper proposes the sequential Latin hypercube sampling (SLHS)-support vector machine (SVM)-AdaBoost algorithm, which enhances the training efficiency of the base classifier and circumvents local optima during the search process through SLHS optimization. Additionally, it mitigates the bottleneck of sample imbalance by adjusting the sample weight distribution using the AdaBoost algorithm. In comparisons, the modeling efficiency, prediction accuracy on the test set, and macro-averaged precision, recall, and F1-score of SLHS-SVM-AdaBoost are improved by 22.7%, 5.7%, 36.0%, 25.0%, and 34.2%, respectively, relative to Grid-SVM, and by 22.2%, 2.1%, 11.3%, 2.8%, and 7.4%, respectively, relative to particle swarm optimization (PSO)-SVM-AdaBoost. Combining Latin hypercube sampling with the SLHS-SVM-AdaBoost algorithm, a classification prediction model of the anti-interference performance of UAV data links, which takes factors such as the three-dimensional position of the UAV and the antenna bearing into consideration, is established and used to assess the safety of the classical flying path and to optimize the flying route. It was found that the risk of loss of communications cannot be completely avoided by adjusting the flying altitude along the classical path, whereas intelligent path planning based on the classification prediction model can completely avoid interference while reducing the route length by at least 2.3%, benefiting both safety and operational efficiency.
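A hedged scikit-learn sketch of the SVM-AdaBoost combination (without the SLHS hyperparameter search): AdaBoost reweights samples each boosting round, which is what counteracts class imbalance, and an SVC that accepts per-sample weights serves as the base learner. The synthetic imbalanced dataset is a stand-in for the UAV link features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# hypothetical stand-in for link-quality samples (position, bearing, ...)
X, y = make_classification(n_samples=600, n_features=6,
                           weights=[0.8, 0.2], random_state=0)  # imbalanced
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# SAMME boosting only needs hard predictions, so a plain SVC works as base
# learner; scikit-learn < 1.2 calls the parameter base_estimator= instead
clf = AdaBoostClassifier(estimator=SVC(kernel="rbf", C=1.0),
                         n_estimators=20, algorithm="SAMME", random_state=0)
clf.fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.3f}")
```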
For positive integers k and r, a (k, r)-coloring of a graph G is a proper vertex k-coloring of G such that the neighbors of any vertex v ∈ V(G) receive at least min{d_G(v), r} different colors. The r-hued chromatic number of G, denoted χ_r(G), is the smallest integer k such that G admits a (k, r)-coloring. Let Q_n be the n-dimensional hypercube. For any integers n and r with n ≥ 2 and 2 ≤ r ≤ 5, we investigated the behavior of χ_r(Q_n), and we determined the exact values of χ_2(Q_n) and χ_3(Q_n) for all positive integers n.
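A small sketch (using networkx, with an invented helper name) that checks the (k, r)-coloring condition on Q_3. The parity 2-coloring is a proper coloring, but every vertex sees only one color among its neighbors, so it fails the r = 2 hued condition; this is exactly why χ_2(Q_n) exceeds the ordinary chromatic number 2.

```python
import networkx as nx

def is_kr_coloring(G, color, r):
    """Proper coloring in which each v sees >= min(deg(v), r) neighbor colors."""
    for v in G:
        nbr_colors = {color[u] for u in G[v]}
        if color[v] in nbr_colors:                  # not a proper coloring
            return False
        if len(nbr_colors) < min(G.degree(v), r):   # too few colors around v
            return False
    return True

Q3 = nx.hypercube_graph(3)            # vertices are 0/1-tuples
parity = {v: sum(v) % 2 for v in Q3}  # proper 2-coloring of a bipartite graph
print(is_kr_coloring(Q3, parity, r=2))  # False: fails the 2-hued condition
```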
Given a graph G and a non-negative integer h, the h-restricted connectivity κ^h(G) of G is the minimum cardinality of a set of vertices of G, if any, whose deletion disconnects G and leaves every remaining component with minimum degree at least h (so every surviving vertex keeps at least h neighbors); and the h-extra connectivity κ_(h)(G) of G is the minimum cardinality of a set of vertices of G, if any, whose deletion disconnects G and leaves every remaining component with order more than h. This paper shows that for the hypercube Q_n and the folded hypercube FQ_n, κ^1(Q_n) = κ_(1)(Q_n) = 2n-2 for n ≥ 3, κ^2(Q_n) = 3n-5 for n ≥ 4, κ^1(FQ_n) = κ_(1)(FQ_n) = 2n for n ≥ 4, and κ_(2)(FQ_n) = 4n-4 for n ≥ 8.
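These definitions can be checked by brute force on small cases. The sketch below (invented helper, exponential search, only feasible for tiny graphs) confirms κ_(1)(Q_3) = 4, matching 2n-2 at n = 3.

```python
import itertools
import networkx as nx

def extra_connectivity(G, h):
    """Smallest vertex cut leaving >= 2 components, each with > h vertices."""
    nodes = list(G)
    for size in range(1, len(nodes)):
        for cut in itertools.combinations(nodes, size):
            H = G.copy()
            H.remove_nodes_from(cut)
            comps = list(nx.connected_components(H))
            if len(comps) >= 2 and all(len(c) > h for c in comps):
                return size
    return None

Q3 = nx.hypercube_graph(3)
print(extra_connectivity(Q3, h=1))  # 4: deleting the 4 outside neighbors of
                                    # an edge splits Q_3 into two 2-vertex parts
```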
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering the efficiencies at 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, the blade outlet angle β2, and the blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies at the two selected operating points are calculated as objectives using the software CFX 14.5. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% at 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for the reduction of pressure fluctuations.
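A compressed, hypothetical sketch of the same three-step pipeline: an LHS design of experiments over the design variables, a surrogate fitted to the objective, and a global search on the cheap surrogate. The quadratic "efficiency" function stands in for the CFX evaluations, and SciPy's differential evolution stands in for PSO.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution
from scipy.stats import qmc

def efficiency(x):
    """Hypothetical stand-in for one expensive CFD evaluation (max at known x*)."""
    b2, beta2, phi = x  # normalized design variables in [0, 1]
    return -((b2 - 0.5)**2 + (beta2 - 0.3)**2 + (phi - 0.7)**2)

# 1) optimized LHS design of experiments (optimization= needs SciPy >= 1.8)
X = qmc.LatinHypercube(d=3, optimization="random-cd", seed=0).random(40)
y = np.array([efficiency(x) for x in X])

# 2) surrogate model of the objective over the design space
surrogate = RBFInterpolator(X, y)

# 3) global search on the surrogate (differential evolution in place of PSO)
res = differential_evolution(lambda x: -surrogate(x[None])[0],
                             bounds=[(0, 1)] * 3, seed=0)
print("best design:", res.x, "surrogate efficiency:", -res.fun)
```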
Constructing a metamodel with global high fidelity across the design space is significant in engineering design. In this paper, a double-stage metamodel (DSM) is constructed that integrates the advantages of both interpolation and regression metamodels. It takes a regression model as the first stage to fit the overall distribution of the original model, and then uses an interpolation model of the regression model's approximation error as the second stage to improve accuracy. Under the same conditions and with the same samples, the DSM exhibits higher fidelity and better represents the physical characteristics of the original model. To validate the characteristics of the DSM, three examples are investigated: the Ackley function, airfoil aerodynamic analysis, and wing aerodynamic analysis. Finally, airfoil and wing aerodynamic design optimizations using a genetic algorithm are presented to verify the engineering applicability of the DSM.
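The two-stage idea is easy to prototype: fit a cheap regression for the global trend, then interpolate its residuals to restore local detail. The sketch below assumes a 1-D toy model and SciPy's RBF interpolator; everything in it is an invented stand-in, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # toy "expensive" original model
X = rng.uniform(-2, 2, 25)[:, None]
y = f(X[:, 0])

# stage 1: low-order polynomial regression captures the global trend
coef = np.polyfit(X[:, 0], y, deg=3)
trend = lambda x: np.polyval(coef, x)

# stage 2: RBF interpolation of the regression residuals adds local accuracy
resid_model = RBFInterpolator(X, y - trend(X[:, 0]))
dsm = lambda x: trend(x[:, 0]) + resid_model(x)

xt = np.linspace(-2, 2, 200)[:, None]
print(f"max DSM error on [-2, 2]: {np.abs(dsm(xt) - f(xt[:, 0])).max():.4f}")
```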
The exchanged hypercube EH(s, t) (where s ≥ 1 and t ≥ 1) is obtained by systematically removing links from a regular hypercube Q_(s+t+1). One-step diagnosis of exchanged hypercubes, which involves only one testing phase during which processors test each other, is discussed. The diagnosabilities of exchanged hypercubes are studied using the pessimistic one-step diagnosis strategy under two kinds of diagnosis models: the PMC model and the MM* model. The main results presented here are proofs that the degree of diagnosability of EH(s, t) under the pessimistic one-step t_1/t_1 fault diagnosis strategy is 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the PMC model, and that it is likewise 2s where 1 ≤ s ≤ t (respectively, 2t where 1 ≤ t ≤ s) based on the MM* model.
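For reference, a hedged construction of EH(s, t) as a graph. The bit-role convention (which bit class is exchanged when the last bit c is 0 or 1) varies between papers, so this follows one common formulation and should be checked against the definition actually in use.

```python
import itertools
import networkx as nx

def exchanged_hypercube(s, t):
    """One common formulation of EH(s, t): vertices are (s+t+1)-bit tuples
    a_{s-1}..a_0 b_{t-1}..b_0 c; edges flip c, or one a-bit when c = 1,
    or one b-bit when c = 0 (conventions differ between papers)."""
    G = nx.Graph()
    G.add_nodes_from(itertools.product((0, 1), repeat=s + t + 1))
    for u in list(G.nodes):
        c = u[-1]
        G.add_edge(u, u[:-1] + (1 - c,))                     # flip c
        positions = range(s) if c == 1 else range(s, s + t)  # a-part or b-part
        for i in positions:
            G.add_edge(u, u[:i] + (1 - u[i],) + u[i + 1:])
    return G

G = exchanged_hypercube(1, 2)
print(G.number_of_nodes(), G.number_of_edges())  # 16 nodes, 20 edges vs. 32 in Q_4
```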
Improving the efficiency of ship optimization is crucial for modern ship design. Compared with traditional methods, multidisciplinary design optimization (MDO) is a more promising approach, and for this reason Collaborative Optimization (CO) is discussed and analyzed in this paper. As one of the most frequently applied MDO methods, CO promotes the autonomy of disciplines while providing a coordinating mechanism that guarantees progress toward an optimum and maintains interdisciplinary compatibility. However, there are some difficulties in applying the conventional CO method, such as the difficulty of choosing an initial point and tremendous computational requirements. To overcome these problems, optimal Latin hypercube design and a radial basis function network were applied to CO. Optimal Latin hypercube design is a modified Latin hypercube design. The radial basis function network approximates the optimization model and is updated during the optimization process to improve accuracy. Examples show that the computational efficiency and robustness of this CO method are higher than those of the conventional CO method.