Abstract: Given a graph G and a non-negative integer h, the h-restricted connectivity κh(G) of G is the minimum cardinality of a set of vertices of G, if any, whose deletion disconnects G while every vertex outside the set keeps at least h neighbors outside the set, i.e., every remaining component has minimum degree at least h; and the h-extra connectivity κ(h)(G) of G is the minimum cardinality of a set of vertices of G, if any, whose deletion disconnects G and leaves every remaining component with order more than h. This paper shows that for the hypercube Qn and the folded hypercube FQn, κ1(Qn) = κ(1)(Qn) = 2n-2 for n ≥ 3, κ2(Qn) = 3n-5 for n ≥ 4, κ1(FQn) = κ(1)(FQn) = 2n for n ≥ 4, and κ(2)(FQn) = 4n-4 for n ≥ 8.
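The h = 1 case above is small enough to verify by exhaustive search. The sketch below (an illustrative helper, not the paper's proof technique) enumerates vertex cuts of Q_3 using only the standard library and confirms κ1(Q_3) = 2n-2 = 4:

```python
from itertools import combinations

def neighbors(v, n):
    """Neighbors of vertex v in the hypercube Q_n (flip each bit)."""
    return [v ^ (1 << i) for i in range(n)]

def is_disconnected(vertices, n):
    """True if the subgraph of Q_n induced by `vertices` is disconnected."""
    vertices = set(vertices)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in neighbors(u, n):
            if w in vertices and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen != vertices

def restricted_connectivity(n, h):
    """Minimum |S| whose deletion disconnects Q_n while every
    remaining vertex keeps at least h neighbors outside S."""
    V = range(2 ** n)
    for k in range(1, 2 ** n):
        for S in combinations(V, k):
            rest = set(V) - set(S)
            if not is_disconnected(rest, n):
                continue
            if all(sum((u ^ (1 << i)) in rest for i in range(n)) >= h
                   for u in rest):
                return k

print(restricted_connectivity(3, 0))  # ordinary connectivity of Q_3: 3
print(restricted_connectivity(3, 1))  # 2n - 2 = 4, matching the paper
```

For n = 3 the search inspects at most C(8,4) = 70 cuts per size, so it terminates instantly; the same exhaustion is hopeless for large n, which is what makes the paper's closed-form values useful.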
Funding: Supported by the National Natural Science Foundation of China (11071022) and the Key Project of Hubei Department of Education (D20092207).
Abstract: The generalized conditional fault-tolerant embedding is investigated, in which the n-dimensional folded hypercube network (denoted by FQn) acts as the host graph and the longest fault-free cycle represents the guest graph. Under conditions looser than those of previous works, it is shown that FQn has a fault-free cycle of length at least 2^n - 2|Fv| when the number of faulty vertices and non-critical faulty edges is at most 2n-4, where |Fv| is the number of faulty vertices. This provides further theoretical evidence that FQn has excellent node-fault-tolerance and edge-fault-tolerance when used as a topology of large-scale computer networks.
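For readers new to the topology, FQn is Qn augmented with the 2^(n-1) complementary edges joining each vertex to its bitwise complement, which makes it (n+1)-regular. A quick construction sketch (the function name is illustrative):

```python
def folded_hypercube(n):
    """Adjacency sets of FQ_n: the hypercube Q_n plus an edge from
    each vertex to its bitwise complement."""
    mask = (1 << n) - 1
    adj = {v: set() for v in range(1 << n)}
    for v in range(1 << n):
        for i in range(n):
            adj[v].add(v ^ (1 << i))   # ordinary hypercube edges
        adj[v].add(v ^ mask)           # complementary edge
    return adj

adj = folded_hypercube(4)
print(all(len(nbrs) == 5 for nbrs in adj.values()))  # FQ_4 is 5-regular: True
```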
Funding: Supported by the National Natural Science Foundation of China (11171283, 11471273, 11461038, 11301440) and the Natural Science Foundation of Shanxi Province (2014021010-2).
Abstract: The classical hypercube structure is a popular topological architecture in parallel computing environments, and a large number of variations based on the hypercube have been proposed in the past three decades. Reliability evaluation of systems is important to the design and maintenance of multiprocessor systems. The h-extra edge-connectivity of a graph G(V, E) is a measure of the reliability of interconnection systems, defined as the minimum cardinality of a subset of the edge set, if any, whose deletion disconnects G and leaves every remaining component with at least h vertices. This paper shows that the h-extra edge-connectivity of the hypercube Qn is the constant 2^(n-1) for 2^(n-1)/3 ≤ h ≤ 2^(n-1) and n ≥ 4, which extends the result of [Bounding the size of the subgraph induced by m vertices and extra edge-connectivity of hypercubes, Discrete Applied Mathematics, 2013, 161(16): 2753-2757].
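The smallest interesting case can be brute-forced: with h = 2 (every component must have at least 2 vertices), no cut may simply isolate a vertex, and deleting the four edges that surround any edge of Q_3 leaves that edge as a component of order 2. The sketch below (illustrative helper names, stdlib only) confirms the value 2n-2 = 4 for n = 3:

```python
from itertools import combinations

def q_edges(n):
    """Edge list of Q_n (u < v, differing in exactly one bit)."""
    return [(u, u ^ (1 << i)) for u in range(1 << n)
            for i in range(n) if u < u ^ (1 << i)]

def component_sizes(n, removed):
    """Component sizes of Q_n after deleting the edges in `removed`."""
    alive = {frozenset(e) for e in q_edges(n)} - {frozenset(e) for e in removed}
    seen, sizes = set(), []
    for s in range(1 << n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(u ^ (1 << i) for i in range(n)
                         if frozenset((u, u ^ (1 << i))) in alive)
        seen |= comp
        sizes.append(len(comp))
    return sizes

def extra_edge_connectivity(n, h):
    """Minimum number of edges whose deletion disconnects Q_n while
    every component keeps at least h vertices (brute force)."""
    E = q_edges(n)
    for k in range(1, len(E) + 1):
        for cut in combinations(E, k):
            sizes = component_sizes(n, cut)
            if len(sizes) >= 2 and min(sizes) >= h:
                return k

print(extra_edge_connectivity(3, 1))  # ordinary edge connectivity: 3
print(extra_edge_connectivity(3, 2))  # 2n - 2 = 4 for n = 3
```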
Funding: Supported in part by the NSC under Grant No. NSC102-2221-E-468-018.
Abstract: System-level fault identification is a key subject for maintaining the reliability of multiprocessor interconnected systems. This task requires fast and accurate inference from large volumes of data, and the problem of fault identification in an unstructured graph has been proved to be NP-hard (non-deterministic polynomial-time hard). In this paper, we adopt the PMC diagnostic model (first proposed by Preparata, Metze, and Chien) as the foundation of point-to-point probing technology; a system contains only restricted faults if every one of its fault-free units has at least one fault-free neighbor. Under this condition we propose an efficient method of identifying restricted faults in the folded hypercube, a promising alternative to the popular hypercube topology.
Abstract: We study embeddings of the n-dimensional hypercube into the cycle with 2^n vertices. We prove that the circular wirelength attains its minimum under Gray coding; this was called the CT conjecture by Chavez and Trapp (Discrete Applied Mathematics, 1998). The problem was claimed to have been settled by Ching-Jung Guu in her doctoral dissertation "The circular wirelength problem for hypercubes" (University of California, Riverside, 1997), but many argue that there are gaps in her proof. We eliminate those gaps.
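The quantity in question is concrete: place vertex g(i) = i XOR (i >> 1) (the reflected Gray code) at position i of the cycle and sum the cyclic distances over all hypercube edges. For n = 3 an exhaustive search over all 8! placements confirms that Gray coding attains the minimum (a toy check, not the dissertation's argument):

```python
from itertools import permutations

def circular_wirelength(pos, n):
    """Sum of cyclic distances of all Q_n edges under placement pos[v]."""
    N = 1 << n
    total = 0
    for u in range(N):
        for i in range(n):
            v = u ^ (1 << i)
            if u < v:
                d = abs(pos[u] - pos[v])
                total += min(d, N - d)   # distance along the cycle
    return total

n, N = 3, 8
gray_pos = {i ^ (i >> 1): i for i in range(N)}   # vertex -> cycle position
gray_wl = circular_wirelength(gray_pos, n)
best = min(circular_wirelength(dict(enumerate(p)), n)
           for p in permutations(range(N)))
print(gray_wl, best)  # 20 20: the Gray code meets the exhaustive minimum
```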
Abstract: In this paper, we present an algorithm for embedding an m-sequential k-ary tree into its optimal hypercube with dilation at most 2, and we prove its correctness.
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region of China, "Spanning connectivity and supereulerian properties of graphs" (2022D01C410).
Abstract: For positive integers k and r, a (k, r)-coloring of a graph G is a proper vertex k-coloring of G such that the neighbors of any vertex v ∈ V(G) receive at least min{d_G(v), r} different colors. The r-hued chromatic number of G, denoted χ_r(G), is the smallest integer k such that G admits a (k, r)-coloring. Let Q_n be the n-dimensional hypercube. For any integers n and r with n ≥ 2 and 2 ≤ r ≤ 5, we investigated the behavior of χ_r(Q_n), and determined the exact values of χ_2(Q_n) and χ_3(Q_n) for all positive integers n.
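The definition is easy to check exhaustively on tiny cubes. For Q_2 = C_4, properness plus the requirement that each vertex's two neighbors receive distinct colors forces all four vertices to get different colors, so χ_2(Q_2) = 4; a brute-force sketch (illustrative, stdlib only):

```python
from itertools import product

def hued_chromatic(n, r):
    """Smallest k admitting a (k, r)-coloring of Q_n, by exhaustion."""
    V = range(1 << n)
    nbrs = {v: [v ^ (1 << i) for i in range(n)] for v in V}
    for k in range(1, (1 << n) + 1):
        for col in product(range(k), repeat=1 << n):
            proper = all(col[v] != col[w] for v in V for w in nbrs[v])
            # every vertex of Q_n has degree n, so min{d(v), r} = min(n, r)
            if proper and all(len({col[w] for w in nbrs[v]}) >= min(n, r)
                              for v in V):
                return k

print(hued_chromatic(2, 2))  # 4
```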
Abstract: The diagnosability of a multiprocessor system or an interconnection network is an important research topic. Such a system or network has an underlying topology, which is usually represented by a graph. In this paper, we establish the g-good-neighbor diagnosability of the exchanged hypercube EH(s, t) under the PMC model and the MM* model.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 71031001, 70771004, 70901002 and 71171007, the Foundation for the Author of National Excellent Doctoral Dissertation of PR China under Grant 201189, and the Program for New Century Excellent Talents in University under Grant NCET-11-0778.
Abstract: Recent years have witnessed increasing interest in interval-valued data analysis. As one of the core topics, linear regression attracts particular attention. It attempts to model the relationship between one or more explanatory variables and a response variable by fitting a linear equation to the interval-valued observations. Despite well-known methods such as CM, CRM and CCRM proposed in the literature, further study is still needed to build a regression model that can capture the complete information in interval-valued observations. To this end, in this paper we propose the novel Complete Information Method (CIM) for linear regression modeling. By dividing hypercubes into informative grid data, CIM defines the inner product of interval-valued variables and transforms regression modeling into the computation of some inner products. Experiments on both synthetic and real-world data sets demonstrate the merits of CIM in modeling interval-valued data and in avoiding the mathematical incoherence introduced by CM and CRM.
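For context, the classical center method (CM) mentioned above simply runs ordinary least squares on interval midpoints. A minimal one-variable sketch on synthetic intervals (illustrative only; CIM itself requires the grid-data construction from the paper):

```python
def center_method(x_intervals, y_intervals):
    """Classical CM: ordinary least squares on interval midpoints."""
    mx = [(l + u) / 2 for l, u in x_intervals]
    my = [(l + u) / 2 for l, u in y_intervals]
    n = len(mx)
    xbar, ybar = sum(mx) / n, sum(my) / n
    slope = (sum((a - xbar) * (b - ybar) for a, b in zip(mx, my))
             / sum((a - xbar) ** 2 for a in mx))
    return slope, ybar - slope * xbar  # (slope, intercept)

# Synthetic intervals whose midpoints satisfy y = 2x + 1 exactly.
xs = [(i - 0.5, i + 0.5) for i in range(5)]
ys = [(2 * i + 0.5, 2 * i + 1.5) for i in range(5)]
print(center_method(xs, ys))  # (2.0, 1.0)
```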
Abstract: Based upon hypercube multiprocessor systems, this paper analyses in detail the communication performance under the greedy multicast algorithm GMA. The mean delay time of a message, both at a node and in the system, under multicasting is derived. For comparison, the delay of a multicast message is compared with that of multiple one-to-one messages.
Abstract: A fault-tolerant and heuristic routing algorithm for faulty hypercube systems is described. To improve efficiency, the algorithm adopts a heuristic backtracking strategy, and each node keeps an array recording the faulty-link information of all its neighbors to avoid unnecessary searching of known faulty links. Furthermore, the faulty-link information is dynamically accumulated, and the technique of heuristically searching for the optimal link is used. The algorithm routes messages through a minimum feasible path between the sender and receiver if at least one such path exists, and takes the optimal path with higher probability when faulty links exist in the faulty hypercube.
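The backtracking idea can be sketched as a depth-first search that tries the dimensions reducing the Hamming distance to the destination first and backtracks around faulty links. This is a simplified illustration, not the paper's algorithm (which additionally accumulates neighbors' fault information):

```python
def route(src, dst, n, faulty):
    """Heuristic DFS routing in Q_n that avoids `faulty` links
    (a set of frozenset edges); returns a path or None."""
    def hamming(a, b):
        return bin(a ^ b).count("1")

    def dfs(u, visited):
        if u == dst:
            return [u]
        # Try dimensions that move closer to dst first.
        dims = sorted(range(n), key=lambda i: hamming(u ^ (1 << i), dst))
        for i in dims:
            v = u ^ (1 << i)
            if v in visited or frozenset((u, v)) in faulty:
                continue
            rest = dfs(v, visited | {v})
            if rest is not None:
                return [u] + rest
        return None  # dead end: backtrack

    return dfs(src, {src})

# Both links toward 011 out of 000 are faulty, forcing a detour.
faulty = {frozenset((0, 1)), frozenset((0, 2))}
path = route(0, 3, 3, faulty)
print(path)  # a fault-free path from 000 to 011 avoiding both broken links
```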
Funding: This project is supported by the National Natural Science Foundation of China (10671081).
Abstract: The generalized hypercube (denoted by Q(d1, d2, ..., dn)) is an important network topology for parallel processing computer systems. Some methods of forming a big cycle from small cycles and links have been developed. Based on these, we have proved that in generalized hypercubes, every edge is contained in a cycle of every length from 3 to |V(G)| inclusive, and cycles of all such lengths have been constructed. The edge-pancyclicity and node-pancyclicity of generalized hypercubes can be applied in the topology design of computer networks to improve network performance.
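The claim can be verified exhaustively for a small instance such as Q(3,3), the 9-vertex Hamming graph K_3 □ K_3: every edge lies on a cycle of each length from 3 to 9. An illustrative brute-force check:

```python
from itertools import product

def gen_hypercube(dims):
    """Generalized hypercube Q(d1,...,dn): vertices are mixed-radix
    tuples, adjacent iff they differ in exactly one coordinate."""
    V = list(product(*[range(d) for d in dims]))
    adj = {v: [w for w in V
               if sum(a != b for a, b in zip(v, w)) == 1] for v in V}
    return V, adj

def cycle_lengths_through(u, v, adj, target):
    """Cycle lengths through edge (u, v), found by DFS over simple
    paths; stops once every length in `target` has been seen."""
    lengths = set()
    def dfs(x, path):
        if target <= lengths:
            return
        for y in adj[x]:
            if y == u and len(path) >= 3:
                lengths.add(len(path))   # path closes into a cycle
            elif y not in path:
                dfs(y, path + [y])
    dfs(v, [u, v])
    return lengths

V, adj = gen_hypercube((3, 3))
target = set(range(3, 10))
ok = all(target <= cycle_lengths_through(u, v, adj, target)
         for u in V for v in adj[u] if u < v)
print(ok)  # every edge lies on cycles of all lengths 3..9: True
```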
Funding: Supported by the National Natural Science Foundation of China (Nos. 61672025, 60974082, 61179040 and 61075117) and the Shandong Provincial Natural Science Foundation (No. ZR2021MF012).
Abstract: Diagnosability of a multiprocessor system is an important measure of the reliability of interconnection networks. System-level diagnosis is a primary strategy to identify the faulty processors in a multiprocessor system. Based on a sound assumption proposed recently by Zhu et al., we propose a new diagnosability named non-inclusion diagnosability and show that the non-inclusion diagnosability t_N(Q_n) of the hypercube under the PMC model is 2n-2. That is, assuming that if two vertex sets F_1 and F_2 are both consistent with a syndrome and F_1 ⊂ F_2, then F_2 is not the faulty set we are looking for, the faulty set F is 1-step diagnosable in Q_n under the PMC model whenever |F| ≤ 2n-2.
Funding: Supported by NSFC (11371162), NSFC (11171129) and HuBei (T201103).
Abstract: Let FFv be the set of faulty nodes in an n-dimensional folded hypercube FQn with |FFv| ≤ n-1, where the faulty vertices are not all adjacent to the same vertex. In this paper, we show that if n ≥ 4, then every edge of FQn - FFv lies on a fault-free cycle of every even length from 6 to 2^n - 2|FFv|.
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 51875014, U2233212 and 51875015), the Natural Science Foundation of Beijing Municipality, China (No. L221008), the "Science and Technology Innovation 2025" Major Project of Ningbo, China (No. 2022Z005), and the Tianmushan Laboratory Project, China (No. TK2023-B-001).
Abstract: For uncertainty quantification of complex models with high-dimensional, nonlinear, multi-component coupling, such as digital twins, traditional statistical sampling methods such as random sampling and Latin hypercube sampling require a large number of samples, which entails huge computational costs. Therefore, how to construct a small sample space has been a hot issue for researchers. To this end, this paper proposes a sequential search-based Latin hypercube sampling scheme to generate efficient and accurate samples for uncertainty quantification. First, the sampling range is formed by carving out the polymorphic uncertainty based on theoretical analysis. Then, the optimal Latin hypercube design is selected using the Latin hypercube sampling method combined with the "space filling" criterion. Finally, the sample selection function is established, and the next most informative sample is optimally selected to obtain the sequential test sample. Compared with classical sampling methods, the generated samples retain more information while remaining sparse. A series of numerical experiments demonstrates the superiority of the proposed sequential search-based Latin hypercube sampling scheme, which provides reliable uncertainty quantification results with small sample sizes.
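For reference, the baseline Latin hypercube sampling that the paper improves on stratifies each of the d dimensions into m equal bins and draws exactly one point per bin in every dimension, pairing bins across dimensions by independent random permutations; a minimal sketch:

```python
import random

def latin_hypercube(m, d, rng=random):
    """m samples in [0,1)^d with exactly one sample per stratum in
    every dimension (basic LHS, no space-filling optimization)."""
    cols = []
    for _ in range(d):
        bins = list(range(m))
        rng.shuffle(bins)                      # pair strata at random
        cols.append([(b + rng.random()) / m for b in bins])
    return list(zip(*cols))                    # m points of dimension d

pts = latin_hypercube(5, 2)
# Each dimension hits every stratum [k/5, (k+1)/5) exactly once:
print(sorted(int(x * 5) for x, _ in pts))  # [0, 1, 2, 3, 4]
```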
Funding: Funded by the National Natural Science Foundation of China (42174011).
Abstract: With the continuous improvement of the accuracy of geodetic deformation data, the inversion of seismic source parameters places higher demands on nonlinear inversion algorithms. In this research, an improved Sparrow Search Algorithm (SSA) is proposed for the seismic source parameter inversion problem. By replacing the original population generation with Latin hypercube sampling, the improved algorithm reduces the repetition of samples in population initialization. Subsequently, the algorithm introduces adaptive weights in the discoverer generation phase and combines them with the Levy flight strategy to make the search more comprehensive and improve the search accuracy throughout the iteration process. As a result, the improved Latin hypercube-based sparrow search algorithm (ILHSSA) offers advantages in iterative convergence speed and stability. To verify the performance of ILHSSA, the basic genetic algorithm (GA) and the sparrow search algorithm (SSA) are compared with ILHSSA on simulated earthquakes of two different types. The simulation experiments show that ILHSSA outperforms SSA in accuracy and stability. Compared with GA, ILHSSA achieves the same inversion accuracy, and it even surpasses GA in inversion speed and in the inversion results of some parameters, demonstrating better stability. Finally, the improved algorithm is applied to the 2017 Bodrum-Cos earthquake and the 2016 Amatrice earthquake in Italy. The inversion results reflect the practicality and reliability of the improved algorithm.
Funding: Supported by the Sichuan Science and Technology Program under Grant No. 2024NSFSC0932 and the National Natural Science Foundation of China under Grant No. 52008047.
Abstract: Probabilistic assessment of seismic performance is a crucial aspect of evaluating the seismic behavior of structures. For complex bridges with inherent uncertainties, conducting precise and efficient seismic reliability analysis remains a significant challenge. To address this issue, the current study introduces a sample-unequal weight fractional moment assessment method based on an improved correlation-reduced Latin hypercube sampling (ICLHS) technique. This method integrates the benefits of importance sampling techniques with interpolatory quadrature formulas to enhance the accuracy of estimating the extreme value distribution (EVD) of the seismic response of complex nonlinear structures subjected to non-stationary ground motions. Additionally, the core theoretical approaches employed in seismic reliability analysis (SRA) are elaborated, such as dimension reduction for simulating non-stationary random ground motions and a fractional-maximum entropy single-loop solution strategy. The effectiveness of the proposed method is validated on a three-story nonlinear shear frame structure. Furthermore, a comprehensive reliability analysis of a real-world long-span, single-pylon suspension bridge is conducted using the developed theoretical framework within the OpenSees platform, leading to key insights and conclusions.
Funding: The authors thank the Ontario Ministry of Agriculture, Food and Rural Affairs, Canada, for supporting this project by providing updated soil information on Ontario and Middlesex County; supported by the Natural Sciences and Engineering Research Council of Canada (No. RGPIN-2014-4100).
Abstract: Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach of disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach of pure polygon disaggregation (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), each with two sample sizes (83100 and 19420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. C5.0 (Kappa index = 0.58–0.63) performed better than RF (Kappa index = 0.53–0.54) at the larger sample size, and PPD with C5.0 at the larger sample size was the best-performing approach (Kappa index = 0.63). At the smaller sample size, both cLHS (Kappa index = 0.41–0.48) and SRS (Kappa index = 0.40–0.47) produced similar accuracy results. The disaggregation approach PPD exhibited lower processing capacity and time demands (1.62–5.93 h) while yielding maps with lower uncertainty compared to DSMART (2.75–194.2 h). For CSMs predominantly composed of pure polygons, utilizing PPD for soil series disaggregation is a more efficient and rational choice. However, DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
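The Kappa index reported above corrects observed agreement p_o for chance agreement p_e via κ = (p_o - p_e) / (1 - p_e). A small sketch computing it from a confusion matrix (an illustrative helper, not code from the study):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_e = sum(sum(row) * sum(col) for row, col
              in zip(confusion, zip(*confusion))) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# 2-class toy example: 35/50 points agree, chance agreement is 0.5.
print(round(cohens_kappa([[20, 5], [10, 15]]), 6))  # 0.4
```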
Funding: Supported by the National Natural Science Foundation of China under Grant No. 69933020 and the Natural Science Foundation of Shandong Province of China under Grant No. Y2002G03.