Huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy during the sensitivity analysis is proposed, i.e., transforming the strain energy of an analysis element into that of its material elements, for both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve computational efficiency, (2) make up for the shortcoming that AHT-based NTO may have difficulty converging, especially for 3D problems, and (3) successfully cope with high-resolution, complex 3D NTO problems on a personal computer.
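As a concrete anchor for the SIMP framework mentioned above, here is a minimal sketch. The penalty p = 3, the material constants, and the coarse-grid averaging rule are illustrative assumptions, not the paper's exact scheme:

```python
# Minimal SIMP material interpolation sketch (illustrative values;
# penalty p = 3 and the averaging rule are assumptions, not the
# paper's exact multi-resolution formulation).
def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3):
    """Young's modulus of an element with density rho in [0, 1]."""
    return Emin + rho**p * (E0 - Emin)

def analysis_stiffness(material_rhos, p=3):
    """Multi-resolution idea: one coarse analysis element integrates
    the penalized densities of the finer material elements inside it."""
    return sum(r**p for r in material_rhos) / len(material_rhos)

full = simp_modulus(1.0)      # solid material keeps full stiffness
half = simp_modulus(0.5)      # intermediate density is penalized
coarse = analysis_stiffness([1.0, 1.0, 0.0, 0.0])
```

The penalization makes intermediate densities structurally inefficient, which is what drives SIMP designs toward crisp 0-1 material layouts.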
Inter-domain path computation is a major issue in multi-domain networks. The Hierarchical Path Computation Element (H-PCE) is a semi-centralized architecture for computing inter-domain paths. To facilitate H-PCE in inter-domain path computation, this paper proposes a topology aggregation scheme that abstracts the edge nodes and their connecting inter-domain link into one vertex, yielding more optimal paths while guaranteeing confidentiality. The effectiveness of the scheme is demonstrated by simulation on wavelength routing in a multi-domain Wavelength Division Multiplexing (WDM) network. Simulation results show that this scheme reduces the inter-domain blocking probability by at least 10% compared with the traditional Domain-to-the-Node (DtN) scheme.
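The aggregation idea can be sketched on a toy graph. The node names, the dict-of-sets graph representation, and the merge rule below are invented for illustration; the paper's scheme operates on WDM domain topologies:

```python
# Toy sketch of topology aggregation: collapse two domains' edge nodes
# plus their shared inter-domain link into one abstract vertex, hiding
# internal topology (graph layout and names are invented).
def aggregate(graph, merge_nodes, new_node):
    """Collapse merge_nodes into new_node in place, keeping outside edges."""
    merged = set()
    for n in merge_nodes:
        merged |= graph.pop(n) - set(merge_nodes)
    graph[new_node] = merged
    for n, nbrs in graph.items():
        if nbrs & set(merge_nodes):
            graph[n] = (nbrs - set(merge_nodes)) | {new_node}
    return graph

# A1-A2 in domain A, B1-B2 in domain B, A2-B1 is the inter-domain link:
g = {"A1": {"A2"}, "A2": {"A1", "B1"}, "B1": {"A2", "B2"}, "B2": {"B1"}}
g = aggregate(g, ["A2", "B1"], "X")   # edge nodes + link become vertex X
```

After aggregation, path computation outside the domains sees only the abstract vertex X, which is the confidentiality property the scheme relies on.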
The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
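The physics-informed idea can be sketched without any neural-network machinery. The quadratic "model", the toy PDE u''(x) = -2, and the finite-difference residual below are my simplifications; real PINNs use neural networks and automatic differentiation:

```python
# Sketch of a physics-informed loss: data misfit plus a PDE-residual
# penalty. The "model" is a simple polynomial and the governing
# equation is u''(x) = -2 (both are illustrative assumptions).
def model(x, a, b, c):
    return a * x**2 + b * x + c

def physics_informed_loss(params, xs, ys, h=1e-3, weight=1.0):
    a, b, c = params
    data = sum((model(x, a, b, c) - y) ** 2 for x, y in zip(xs, ys))
    # finite-difference second derivative gives the residual u'' + 2
    pde = sum(((model(x + h, a, b, c) - 2 * model(x, a, b, c)
                + model(x - h, a, b, c)) / h**2 + 2) ** 2 for x in xs)
    return data + weight * pde

xs = [0.0, 0.5, 1.0]
ys = [0.0, 0.25, 0.0]                    # samples of u(x) = x - x**2
loss_true = physics_informed_loss((-1.0, 1.0, 0.0), xs, ys)  # exact solution
loss_bad = physics_informed_loss((1.0, 0.0, 0.0), xs, ys)    # violates PDE
```

Minimizing such a composite loss is what lets physics constraints regularize sparse or noisy data, which is the core appeal the perspective describes.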
This paper presents a hierarchical dynamic routing protocol (HDRP) based on the discrete dynamic programming principle. The proposed protocol can adapt to dynamic and large computer networks (DLCN) with a clustering topology. The procedures for realizing routing update and decision are presented, and a proof of correctness and a complexity analysis of the protocol are also given. The performance of the HDRP, including throughput and average message delay, is evaluated by simulation. The study shows that the HDRP provides a new, practical approach to routing decisions for DLCN or high-speed networks with a clustering topology.
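The discrete dynamic programming principle behind such routing updates is the Bellman-Ford recurrence: each node's cost to a destination is the minimum over neighbors of link cost plus the neighbor's cost. A minimal sketch on an invented three-node network (the HDRP itself additionally organizes nodes into clusters):

```python
# Dynamic-programming routing recurrence (Bellman-Ford relaxation);
# the network and link weights are invented for illustration.
def dp_routes(graph, dest):
    """graph: {node: {neighbor: cost}}; returns cost-to-dest per node."""
    INF = float("inf")
    cost = {n: INF for n in graph}
    cost[dest] = 0.0
    for _ in range(len(graph) - 1):        # at most |V|-1 relaxation sweeps
        for n in graph:
            for nbr, w in graph[n].items():
                cost[n] = min(cost[n], w + cost[nbr])
    return cost

net = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2},
       "C": {"A": 4, "B": 2}}
costs = dp_routes(net, "C")    # A routes via B (cost 3), not directly (4)
```

In a distributed protocol the same relaxation runs asynchronously, with each node exchanging its cost vector with neighbors instead of sweeping a global table.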
Using powerful concepts and tools borrowed from the seminal arsenal connecting physics fundamentals with esoteric set-theoretical operations developed in recent years by the Alexandria E-infinity theoretician M. S. El Naschie, this paper explores the deep implications of some of the dualities Dr. El Naschie has identified and analyzed in his exposés, connecting them with our own Xonic Quantum Physics (XQP), which places dynamical action, rather than spacetime and energy, at the core of the System of the World.
We utilize two different theories to prove that the cosmic dark energy density is the complementary Legendre transformation of the ordinary energy density and vice versa, as given by E(dark) = mc²(21/22) and E(ordinary) = mc²/22. The first theory used is based on G. 't Hooft's remarkably simple renormalization procedure, in which a neat mathematical maneuver is introduced via the dimensionality of our four-dimensional spacetime. Thus, 't Hooft used D = 4 − ε instead of D = 4 and, at the end of an intricate and subtle computation, took the limit ε → 0 to obtain the result while avoiding various problems, including the pole singularity at D = 4. Here, and in contradistinction to the classical form of dimensional regularization, we set ε = k ≈ 0.18033989 and do not take the limit, where k = 2φ⁵ and φ⁵ is the theoretically and experimentally well established Hardy's generic quantum entanglement. At the end we see that the dark energy density is simply the ratio of 4 − k and the smooth, disentangled D = 4, i.e., E(dark)/E = (4 − k)/4 = 3.81966011/4 ≈ 0.95491503. Consequently E(dark) ≃ mc²(21/22), where we have ignored the fine-structure details by rounding 21 + k to 21 and 22 + k to 22, in a manner not that much different from the original form of dimensional regularization theory. The result is subsequently validated by another, equally ingenious approach due mainly to E. Witten and his school of topological quantum field theory. We notice that in that theory the local degrees of freedom are zero; therefore, we are dealing essentially with pure gravity, with its corresponding degrees of freedom and dimension. The results and conclusions of the paper are summarized in Figures 1-3, Table 1 and Flow Chart 1.
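The headline energy split above is plain arithmetic that can be checked directly. This small check is my own illustration, not from the paper, and it verifies only the fractions, not the derivation:

```python
# Arithmetic check of the abstract's headline split of E = m*c**2.
ordinary_fraction = 1 / 22        # E(ordinary) = mc^2 / 22
dark_fraction = 21 / 22           # E(dark) = mc^2 * (21/22)

# the two fractions are complementary by construction
complementary = abs(ordinary_fraction + dark_fraction - 1.0) < 1e-12

ordinary_percent = round(ordinary_fraction * 100, 2)   # about 4.55
dark_percent = round(dark_fraction * 100, 2)           # about 95.45
```

The resulting split of roughly 4.5% ordinary energy versus 95.5% dark sector is the comparison the paper makes to observed cosmological composition.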
The solid-state quantum computation station belongs to group 2 (manipulation of quantum states) of the Synergetic Extreme Condition User Facility. Here we first outline the research background, aspects, and objectives of the station, followed by a discussion of recent scientific and technological progress in this field, based on experimental facilities similar to those to be constructed at the station. Finally, a brief summary and research perspective are presented.
Fundamental particles in nature can be classified as bosons or fermions, which obey their corresponding statistics. However, quasiparticles in condensed matter physics may be neither bosons nor fermions; they can instead be anyons, which satisfy a generalized statistics. These anyons can be related to topological phases of matter. Interestingly, anyons can be used to encode qubits and perform quantum computations with a specific advantage: the corresponding qubits are naturally fault-tolerant due to topological protection. [1,2] This approach is called topological quantum computation. However, its implementation based on natural systems still seems far from realization.
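A toy numerical illustration of the generalized statistics mentioned above (standard textbook physics, not from this article): exchanging two identical particles multiplies the wavefunction by a phase exp(iθ), and the abelian-anyon case is everything between the bosonic and fermionic endpoints:

```python
# Exchange statistics as a phase factor: theta = 0 gives bosons (+1),
# theta = pi gives fermions (-1), and intermediate theta corresponds
# to abelian anyons (illustration only; non-abelian anyons used for
# topological qubits act with matrices instead of a single phase).
import cmath

def exchange_phase(theta):
    return cmath.exp(1j * theta)

boson = exchange_phase(0.0)            # +1
fermion = exchange_phase(cmath.pi)     # -1
anyon = exchange_phase(cmath.pi / 4)   # neither +1 nor -1
```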
We studied the anomalous Josephson effect (AJE) in Josephson trijunctions fabricated on Bi₂Se₃, and found that the AJE in T-shaped trijunctions significantly alters their Majorana phase diagram when an in-plane magnetic field is applied parallel to two of the three constituent junctions. Such a phenomenon in topological-insulator-based Josephson trijunctions provides unambiguous evidence for the existence of the AJE in this system, and may provide an additional knob for controlling Majorana bound states in the Fu–Kane scheme of topological quantum computation.
Fail-safe topology optimization is valuable for ensuring that optimized structures remain operable even under damaged conditions. By selectively removing material stiffness in patches with a fixed shape, the complex phenomenon of local failure is modeled in fail-safe topology optimization. In this work, we first conduct a comprehensive study of the impact of patch size, shape, and distribution on the robustness of fail-safe designs. The findings suggest that larger sizes and finer distributions of material patches yield more robust fail-safe structures. However, a finer patch distribution can significantly increase computational costs, particularly for 3D structures. To keep the computational effort tractable, an efficient fail-safe topology optimization approach is established within the framework of multi-resolution topology optimization (MTOP). Within the MTOP framework, the extended finite element method is introduced to establish a decoupled connection between the analysis mesh and the topology description model. Numerical examples demonstrate that the developed methodology is 2 times faster for 2D problems and over 25 times faster for 3D problems than traditional fail-safe topology optimization, while maintaining similar levels of robustness.
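The damage-scenario evaluation at the heart of fail-safe formulations can be sketched as a worst-case objective. The toy "structure" below (patches in series, compliance as a sum of inverse stiffnesses) is my own stand-in for the FE analysis the paper performs:

```python
# Fail-safe idea in miniature: evaluate the design once per damage
# scenario, each zeroing one patch's stiffness, and take the worst
# case as the objective (toy numbers, not the paper's FE model).
def worst_case(scenarios, evaluate):
    return max(evaluate(s) for s in scenarios)

def compliance(stiffness):
    """Toy structure: patches in series, compliance = sum of 1/k."""
    return sum(1.0 / max(k, 1e-6) for k in stiffness)

base = [2.0, 2.0, 2.0, 2.0]
scenarios = []
for i in range(len(base)):
    damaged = list(base)
    damaged[i] = 1e-6              # patch i damaged: stiffness removed
    scenarios.append(damaged)

intact = compliance(base)
wc = worst_case(scenarios, compliance)
```

The number of scenarios grows with the number of patches, which is exactly why a finer patch distribution inflates cost and why the paper's multi-resolution decoupling matters.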
We introduce a new algebraic approach to the problem of computing the topology of an arrangement of a finite set of real algebraic plane curves presented implicitly. The main achievement of the presented method is the complete avoidance of the irrational numbers that appear when the sweeping method is used in the classical way to solve the problem at hand. It is worth mentioning, however, that the efficiency of the proposed method is only assured for low-degree curves.
This paper aims to solve large-scale and complex isogeometric topology optimization problems that consume significant computational resources. A novel isogeometric topology optimization method with a hybrid CPU/GPU parallel strategy is proposed, and the hybrid parallel strategies for stiffness matrix assembly, equation solving, sensitivity analysis, and design variable update are discussed in detail. To ensure high CPU/GPU computing efficiency, a workload balancing strategy is presented for optimally distributing the workload between CPU and GPU. To illustrate the advantages of the proposed method, three benchmark examples are tested to verify the hybrid parallel strategy. The results show that the hybrid method is faster than both serial CPU and parallel GPU implementations, with speedups of up to two orders of magnitude.
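One simple form of workload balancing splits elements between devices in proportion to their measured throughputs. The rule below is an assumption for illustration; the paper's exact balancing strategy may differ:

```python
# Throughput-proportional workload split between CPU and GPU
# (an assumed balancing rule, not necessarily the paper's).
def balanced_split(n_items, cpu_rate, gpu_rate):
    """Rates are items/second; returns (cpu_items, gpu_items)."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = round(n_items * gpu_share)
    return n_items - n_gpu, n_gpu

# e.g. GPU measured 9x faster than CPU on stiffness assembly:
cpu_n, gpu_n = balanced_split(1_000_000, cpu_rate=1.0, gpu_rate=9.0)
```

With such a split both devices ideally finish at the same time, so neither sits idle waiting for the other during each optimization iteration.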
Heterogeneous strain engineering offers a promising approach for developing high-performance stretchable strain sensors, but the optimal strain distributions remain unexplored. Herein, we derive the optimal strain topology for achieving maximum sensitivity using Monte Carlo simulations and identify the key sensitivity-regulating parameters, thus establishing a general computational design guideline. Mathematical analysis demonstrates that, within the optimal topology, sensitivity is maximized by reducing the strain value of low-strain regions or increasing their area proportion. As a proof of concept, patterned graphene strain sensors (PGSSs) featuring parameterized grooves are designed, with their small strain values and proportions precisely modulated via finite element analysis. Adjusting these parameters enhances sensitivity by factors of ~10.7 and 3.3, with the highest gauge factor reaching 25,600 at 100% strain. Furthermore, the PGSSs can effectively detect human body motions and gauge object dimensions when integrated with robot grippers. The computational framework is applicable across different heterogeneous strain engineering methods.
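A toy series-resistance model (my own construction, not the paper's analysis) shows the qualitative mechanism: when the local resistive response is convex in strain, concentrating the same average strain into a small region increases the overall resistance change and hence the gauge factor:

```python
# Toy heterogeneous-strain sensor: regions in series, each with an
# assumed exponential (convex) resistance response to local strain.
import math

def gauge_factor(regions, c=10.0):
    """regions: list of (length_fraction, local_strain); fractions sum to 1.
    Gauge factor = relative resistance change / average applied strain."""
    r0 = sum(w for w, _ in regions)
    r = sum(w * math.exp(c * e) for w, e in regions)
    avg_strain = sum(w * e for w, e in regions)
    return (r / r0 - 1.0) / avg_strain

uniform = gauge_factor([(1.0, 0.10)])                 # strain everywhere
patterned = gauge_factor([(0.8, 0.0), (0.2, 0.50)])   # same average strain
```

By Jensen's inequality the patterned (heterogeneous) distribution always beats the uniform one for a convex response, consistent with the design rule of shrinking low-strain regions' strain values.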
Reconfigurable linear optical networks based on Mach-Zehnder interferometers (MZIs) offer significant potential for optical information processing, particularly in emerging photonic quantum computing systems. However, device losses and calibration errors accumulate as network complexity grows, posing challenges to the precise mapping of matrix operations. Existing architectures, such as Diamond and Bokun, introduce MZI redundancy into the Reck and Clements architectures to improve reliability, which increases complexity and differential path losses that limit scalability. We propose a compact topology architecture that achieves 100% fidelity by employing a symmetrical MZI to decouple optical loss from the power ratio and introducing extra MZIs to enforce uniform loss distributions. This multi-level optimization enables direct monitoring pathways while supporting precise calibration, and it approaches theoretical fidelity in practical deployments, with direct implications for scalable and fault-tolerant photonic computing systems.
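For reference, here is the lossless textbook transfer matrix of a single MZI, built from two ideal 50:50 couplers around an internal phase shift theta. This idealization is the building block such architectures tune; the paper's symmetric-MZI variant adds loss engineering on top of it:

```python
# Lossless 2x2 MZI model: coupler, internal phase theta, coupler,
# optional external phase phi (textbook idealization, sign conventions
# vary between references).
import cmath

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mzi(theta, phi=0.0):
    bs = [[1 / 2**0.5, 1j / 2**0.5], [1j / 2**0.5, 1 / 2**0.5]]
    internal = [[cmath.exp(1j * theta), 0], [0, 1]]
    external = [[cmath.exp(1j * phi), 0], [0, 1]]
    return matmul(matmul(matmul(bs, internal), bs), external)

bar = mzi(cmath.pi)    # light exits the port it entered (up to phase)
cross = mzi(0.0)       # light fully crosses to the other port
```

Sweeping theta between these two extremes sets an arbitrary power ratio, which is exactly the knob that loss imbalance corrupts in large meshes.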
With a paper published in the 19 February 2025 issue of Nature [1], Microsoft (Redmond, WA, USA) fanned the flames of its unique vision for quantum computing: a stable, error-resistant qubit based on the Majorana fermion, one of the strangest and most elusive particles in physics. The Microsoft Azure Quantum research team's descriptions of a means to detect the as-yet theoretical particles [1]—called "an entirely new state of matter" by Microsoft's chief executive officer [2]—and a design for a chip powered by them (Fig. 1) [3] have refocused attention on the company's ambition to build a topological quantum computer. The approach—if it works—could potentially leapfrog every other in the field.
Medical monitoring systems are widely used. In a medical monitoring system, each user possesses only one piece of logged data that participates in statistical computing. In such a situation, a feasible solution is to scatter the statistical computing workload over corresponding statistical nodes. Two problems must still be resolved, however. One is how the server can use the intermediate results obtained through statistical-node aggregation to perform statistical computing; the statistical variable decomposition technique points the way here. The other is how to design an efficient topological structure for statistical computing. In this paper, a tree topology was adopted to implement data aggregation and improve aggregation efficiency. Two experiments were conducted on the time consumption of statistical computing, focusing on encrypted data aggregation and encrypted data computing. The first experiment indicates that the encrypted data aggregation efficiency of the proposed scheme is better than that of Drosatos' scheme; the second indicates that improving the computing power of the server or the computational efficiency of the functional encryption scheme can shorten the computation time.
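The tree-based aggregation can be sketched as a recursive sum: each statistical node combines its children's partial results so the server receives one aggregate instead of every user's value. The tree shape and values below are invented, and the real scheme aggregates encrypted values rather than plaintext:

```python
# Tree-topology aggregation sketch (plaintext stand-in for the
# encrypted aggregation in the paper; structure and values invented).
def aggregate_tree(node):
    """node: (own_value, [child_nodes]); returns the subtree sum."""
    value, children = node
    return value + sum(aggregate_tree(c) for c in children)

# server (value 0) over two statistical nodes, with users as leaves:
tree = (0, [(5, [(1, []), (2, [])]), (7, [])])
total = aggregate_tree(tree)
```

Because partial sums are combined level by level, the server-side work per round scales with the number of statistical nodes, not the number of users, which is the efficiency argument for the tree topology.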
A methodology for topology optimization based on element-independent nodal density (EIND) is developed. Nodal densities are taken as the design variables and interpolated onto element space, so that the density at any point is determined with a Shepard interpolation function. The influence of the interpolation diameter is discussed, showing good robustness. The new approach is demonstrated on the minimum-volume problem subject to a displacement constraint. The rational approximation of material properties (RAMP) method and a dual programming optimization algorithm are used to penalize intermediate densities and achieve nearly 0-1 solutions. The solutions are shown to be stable and free of mesh dependence and checkerboard patterns without additional constraints. Finally, the computational efficiency is greatly improved by multithreaded parallel computing with OpenMP.
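Shepard interpolation is plain inverse-distance weighting. A minimal 1D sketch (the node positions, densities, and power p = 2 are illustrative choices; the paper works on 2D/3D meshes within an interpolation diameter):

```python
# Shepard (inverse-distance-weighted) interpolation of nodal densities
# onto an arbitrary point (1D illustration of the EIND idea).
def shepard(x, nodes, p=2, eps=1e-12):
    """nodes: list of (position, density); returns density at x."""
    weights = []
    for xi, di in nodes:
        d = abs(x - xi)
        if d < eps:                 # query point coincides with a node
            return di
        weights.append((1.0 / d**p, di))
    total = sum(w for w, _ in weights)
    return sum(w * di for w, di in weights) / total

nodes = [(0.0, 0.0), (1.0, 1.0)]
mid = shepard(0.5, nodes)      # equidistant: equal weights
near = shepard(0.9, nodes)     # dominated by the closer node
```

Because every interpolated density is a convex combination of nodal values, the field stays in [0, 1] and varies smoothly, which is what suppresses checkerboard patterns without extra filtering.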
Granular computing on partitions (RST), coverings (GrCC) and neighborhood systems (LNS) is examined: (1) The order of generality is RST, then GrCC, then LNS. (2) The quotient structure: in RST, it is called the quotient set. In GrCC, it is a simplicial complex, called the nerve of the covering in combinatorial topology. For LNS, the structure has no known description. (3) The approximation space of RST is a topological space generated by a partition, called a clopen space. For LNS, it is a generalized/pretopological space, which is more general than a topological space. For GrCC, there are two possibilities. One is a special case of LNS, namely the topological space generated by the covering. The other is the topology generated by the finite intersections of the members of the covering; the first treats the covering as a base, the second as a subbase. (4) Knowledge representations in RST are symbol-valued systems. In GrCC, they are expression-valued systems. In LNS, they are multivalued systems, reported in 1998. (5) The RST and GrCC representation theories are complete in the sense that granular models can be fully recaptured from the knowledge representations.
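The RST approximation structure in point (3) can be sketched directly: given a partition of the universe, a concept X is approximated from below by the union of blocks inside X and from above by the union of blocks meeting X. The universe and partition below are a toy example:

```python
# Rough-set (RST) lower and upper approximations of a concept X
# with respect to a partition (toy universe {1..5}).
def lower_upper(partition, X):
    X = set(X)
    lower = set().union(*([b for b in partition if b <= X] or [set()]))
    upper = set().union(*([b for b in partition if b & X] or [set()]))
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5}]        # the partition (quotient set)
lo, up = lower_upper(blocks, {1, 2, 3})
```

The gap between the two approximations (here the block {3, 4}) is the boundary region, and a concept is "rough" exactly when that region is nonempty.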
This paper presents partially asynchronous parallel simulation of continuous systems (PAPSoCS) and some approaches to the issues of implementing it on a multicomputer system. To guarantee correct simulation results and speed up the simulation, a scheme for efficient PAPSoCS is proposed, and a virtual star topology is constructed to match the path of message passing, solving the algorithm-architecture adequation problem. Since the messages frequently passed between processors are short, typically only a few bytes, an asynchronous communication mode is employed to reduce the communication ratio. Experimental results show that asynchronous parallel simulation has much higher efficiency than its synchronous counterpart.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11902085 and 11832009), the Science and Technology Association Young Scientific and Technological Talents Support Project of Guangzhou City (Grant No. SKX20210304), and the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515010320).
Funding: This work was supported by the Chang Jiang Scholars Program of the Ministry of Education of China; the National Science Fund for Distinguished Young Scholars under Grant No. 60725104; the National Basic Research Program of China under Grant No. 2007CB310706; the National Natural Science Foundation of China under Grant Nos. 60932002, 60932005, and 61071101; the Hi-Tech Research and Development Program of China under Grant Nos. 2009AA01Z254 and 2009AA01Z215; the NCEF Program of MoE of China; and the Sichuan Youth Science and Technology Foundation under Grant No. 09ZQ026-032.
Funding: Supported by the Australian Research Council (Grant No. IC190100020), the Australian Research Council Industry Fellowship (Grant No. IE230100435), and the National Natural Science Foundation of China (Grant Nos. 12032014 and T2488101).
Funding: Supported by the National Basic Research Program of China (Grant Nos. 2016YFA0300601, 2017YFA0304700, and 2015CB921402), the National Natural Science Foundation of China (Grant Nos. 11527806, 92065203, 12074417, 11874406, and 11774405), the Beijing Academy of Quantum Information Sciences (Grant No. Y18G08), the Strategic Priority Research Program B of the Chinese Academy of Sciences (Grant Nos. XDB33010300, XDB28000000, and XDB07010100), and the Synergetic Extreme Condition User Facility sponsored by the National Development and Reform Commission.
Funding: Financially supported by the National Natural Science Foundation of China (Grant Nos. 12172095, 11832009, and 12302008), the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515011770), and Guangzhou Science and Technology Planning Projects (Grant Nos. 202201010570, 202201020239, 202201020193, and 202201010399).
Funding: Project (No. MTM2005-08690-C02-02) partially supported by a Spanish Ministry of Science and Innovation grant.
Abstract: We introduce a new algebraic approach to the problem of computing the topology of an arrangement of a finite set of real algebraic plane curves presented implicitly. The main achievement of the presented method is the complete avoidance of the irrational numbers that appear when the sweeping method is applied in the classical way to the problem at hand. It is worth mentioning that the efficiency of the proposed method is only assured for low-degree curves.
Funding: The National Key R&D Program of China (2020YFB1708300), the National Natural Science Foundation of China (52005192), and the Project of Ministry of Industry and Information Technology (TC210804R-3).
Abstract: This paper aims to solve large-scale, complex isogeometric topology optimization problems that consume significant computational resources. A novel isogeometric topology optimization method with a hybrid CPU/GPU parallel strategy is proposed, and the hybrid parallel strategies for stiffness matrix assembly, equation solving, sensitivity analysis, and design variable update are discussed in detail. To ensure the high efficiency of CPU/GPU computing, a workload balancing strategy is presented for optimally distributing the workload between CPU and GPU. To illustrate the advantages of the proposed method, three benchmark examples are tested to verify the hybrid parallel strategy. The results show that the hybrid method is faster than both serial CPU and parallel GPU implementations, with speedups of up to two orders of magnitude.
Funding: Supported by the Research Center for Nature-Inspired Science and Technology, The Hong Kong Polytechnic University (Project No. CE1T).
Abstract: Heterogeneous strain engineering offers a promising approach for developing high-performance stretchable strain sensors, but the optimal strain distributions remain unexplored. Herein, we derive the optimal strain topology for achieving maximum sensitivity using Monte Carlo simulations and identify the key sensitivity-regulating parameters, thus establishing a general computational design guideline. Mathematical analysis demonstrates that, within the optimal topology, sensitivity is maximized by reducing the strain value of low-strain regions or increasing their area proportion. As proof of concept, patterned graphene strain sensors (PGSSs) featuring parameterized grooves are designed, with their small strain values and proportions precisely modulated via finite element analysis. Adjusting these parameters enhances sensitivity by factors of ~10.7 and 3.3, with the highest gauge factor reaching 25,600 at 100% strain. Furthermore, the PGSSs can effectively detect human body motions and gauge object dimensions when integrated with robot grippers. The computational framework is applicable across different heterogeneous strain engineering methods.
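The gauge factor quoted above is the standard sensitivity metric for resistive strain sensors, GF = (ΔR/R₀)/ε. A minimal sketch of the arithmetic (the numbers below only restate the abstract's headline figure):

```python
def gauge_factor(delta_r_over_r0, strain):
    # GF = (ΔR/R0) / ε: relative resistance change per unit applied strain.
    return delta_r_over_r0 / strain

# The abstract's headline figure, GF = 25,600 at 100% strain (ε = 1.0),
# corresponds to a 25,600-fold relative resistance change.
print(gauge_factor(25600.0, 1.0))  # 25600.0
```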
Funding: Supported by the Innovation Program for Quantum Science and Technology (Grant Nos. 2021ZD0301400 and 2023ZD0301500) and the National Natural Science Foundation of China (Grant Nos. 62335019 and 62475291).
Abstract: Reconfigurable linear optical networks based on Mach–Zehnder interferometers (MZIs) offer significant potential in optical information processing, particularly in emerging photonic quantum computing systems. However, device losses and calibration errors accumulate as network complexity grows, posing challenges for the precise mapping of matrix operations. Existing architectures, such as Diamond and Bokun, introduce MZI redundancy into the Reck and Clements architectures to improve reliability, which increases complexity and differential path losses that limit scalability. We propose a compact topology architecture that achieves 100% fidelity by employing a symmetrical MZI to decouple optical loss from the power ratio and introducing extra MZIs to enforce uniform loss distributions. This multi-level optimization enables direct monitoring pathways while supporting precise calibration, and it approaches theoretical fidelity in practical deployments, with direct implications for scalable, fault-tolerant photonic computing systems.
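The MZI building block that such meshes tile can be sketched as a 2×2 unitary composed of two 50:50 beam splitters and two phase shifters. A minimal sketch under a common textbook convention (not the specific symmetrical-MZI design proposed in the abstract):

```python
import numpy as np

def mzi_unitary(theta, phi):
    """2x2 transfer matrix of a lossless MZI: two 50:50 beam splitters
    with an internal phase shift theta and an input phase shift phi."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    internal = np.diag([np.exp(1j * theta), 1.0])    # phase in one arm
    external = np.diag([np.exp(1j * phi), 1.0])      # input phase shifter
    return bs @ internal @ bs @ external

U = mzi_unitary(0.3, 1.1)
# A lossless MZI implements a unitary transfer matrix: U†U = I.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
# At theta = 0 this convention gives the full cross (swap) state.
print(abs(mzi_unitary(0.0, 0.0)[0, 0]) < 1e-12)  # True
```

Device losses break exactly this unitarity, which is why the architectures above add redundancy to equalize loss across paths.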
Abstract: With a paper published in the 19 February 2025 issue of Nature [1], Microsoft (Redmond, WA, USA) fanned the flames of its unique vision for quantum computing: a stable, error-resistant qubit based on the Majorana fermion, one of the strangest and most elusive particles in physics. The Microsoft Azure Quantum research team's descriptions of a means to detect the as-yet theoretical particles [1], called "an entirely new state of matter" by Microsoft's chief executive officer [2], and a design for a chip powered by them (Fig. 1) [3] have refocused attention on the company's ambition to build a topological quantum computer. The approach, if it works, could potentially leapfrog every other in the field.
Funding: Supported by the National Natural Science Foundation of China (91112003).
Abstract: Medical monitoring systems are widely used. In a medical monitoring system, each user possesses only one data record that participates in statistical computing. In such a situation, a feasible solution is to scatter the statistical computing workload across corresponding statistical nodes. Two problems must then be resolved. The first is how the server uses the intermediate results obtained through statistical node aggregation to perform statistical computing; the statistical variable decomposition technique points the way here. The second is how to design an efficient topological structure for statistical computing. In this paper, a tree topology was adopted to implement data aggregation and improve aggregation efficiency, and two experiments measured the time consumption of statistical computing, focusing on encrypted data aggregation and encrypted data computation. The first experiment indicates that the encrypted data aggregation efficiency of the proposed scheme is better than that of Drosatos' scheme, and the second indicates that improving the computing power of the server or the computational efficiency of the functional encryption scheme can shorten the computation time.
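The tree-topology aggregation idea can be sketched in plaintext (the paper aggregates encrypted data, which this hypothetical sketch omits): each statistical node sums its children's partial results before passing a single value upward, so the server only combines a few intermediate results.

```python
from dataclasses import dataclass, field

@dataclass
class StatNode:
    value: float = 0.0                            # this node's own data record
    children: list = field(default_factory=list)  # child statistical nodes

def aggregate(node: StatNode) -> float:
    """Post-order traversal: each node returns its subtree's partial sum,
    so the root (the server) never touches individual user records."""
    return node.value + sum(aggregate(child) for child in node.children)

# Four users' records, aggregated through two intermediate statistical nodes.
leaves = [StatNode(value=v) for v in (1.0, 2.0, 3.0, 4.0)]
root = StatNode(children=[StatNode(children=leaves[:2]),
                          StatNode(children=leaves[2:])])
print(aggregate(root))  # 10.0
```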
Funding: Projects (11372055, 11302033) supported by the National Natural Science Foundation of China; project supported by the Huxiang Scholar Foundation from Changsha University of Science and Technology, China; Project (2012KFJJ02) supported by the Key Laboratory of Lightweight and Reliability Technology for Engineering Vehicles, Education Department of Hunan Province, China.
Abstract: A methodology for topology optimization based on element-independent nodal density (EIND) is developed. Nodal densities are used as the design variables and interpolated onto element space to determine the density at any point with the Shepard interpolation function. The influence of the interpolation diameter is discussed, showing good robustness. The new approach is demonstrated on the minimum volume problem subject to a displacement constraint. The rational approximation for material properties (RAMP) method and a dual programming optimization algorithm are used to penalize intermediate densities and achieve nearly 0-1 solutions. Solutions are shown to be stable, mesh-independent, and free of checkerboard patterns without additional constraints. Finally, computational efficiency is greatly improved by multithreaded parallel computing with OpenMP.
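The Shepard (inverse-distance) interpolation used to map nodal densities onto element space can be sketched as ρ(x) = Σᵢ wᵢρᵢ / Σᵢ wᵢ with weights wᵢ = 1/dᵢᵖ over nodes inside an influence radius. A minimal sketch; the radius and power here are hypothetical parameters, not the paper's exact formulation:

```python
import numpy as np

def shepard_density(point, nodes, nodal_densities, radius, power=2.0):
    """Inverse-distance (Shepard) interpolation of nodal densities at a
    query point, using only nodes inside the influence radius."""
    d = np.linalg.norm(nodes - point, axis=1)
    mask = d < radius
    if np.any(d[mask] < 1e-12):  # query point coincides with a node
        return float(nodal_densities[mask][np.argmin(d[mask])])
    w = 1.0 / d[mask] ** power   # weights decay with distance
    return float(np.sum(w * nodal_densities[mask]) / np.sum(w))

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rho = np.array([1.0, 0.0, 0.0, 1.0])
# The element center is equidistant from all four nodes, so the
# interpolated density is the plain average of the nodal values.
print(shepard_density(np.array([0.5, 0.5]), nodes, rho, radius=2.0))  # 0.5
```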
Abstract: Granular computing on partitions (RST), coverings (GrCC), and neighborhood systems (LNS) is examined: (1) The order of generality is RST, then GrCC, then LNS. (2) The quotient structure: in RST, it is called the quotient set. In GrCC, it is a simplicial complex, called the nerve of the covering in combinatorial topology. For LNS, the structure has no known description. (3) The approximation space of RST is a topological space generated by a partition, called a clopen space. For LNS, it is a generalized/pretopological space, which is more general than a topological space. For GrCC, there are two possibilities. One is a special case of LNS, namely the topological space generated by the covering. The other is the topology generated by the finite intersections of the members of a covering: the first treats the covering as a base, the second as a subbase. (4) Knowledge representations in RST are symbol-valued systems. In GrCC, they are expression-valued systems. In LNS, they are multivalued systems, reported in 1998. (5) The RST and GrCC representation theories are complete in the sense that granular models can be fully recaptured from the knowledge representations.
Abstract: This paper presents partially asynchronous parallel simulation of continuous systems (PAPSoCS) and some approaches to the issues of its implementation on a multicomputer system. To guarantee correct simulation results and speed up the simulation, a scheme for efficient PAPSoCS is proposed, and a virtual star topology is constructed to match the path of message passing, solving the algorithm-architecture adequation problem. Since the messages frequently passed between processors are short, typically only a few bytes, an asynchronous communication mode is employed to reduce the communication ratio. Experimental results show that asynchronous parallel simulation has much higher efficiency than its synchronous counterpart.