Abstract: Although computer architectures incorporate fast processing hardware resources, high-performance real-time implementation of a complex control algorithm requires efficient design and software coding of the algorithm so as to exploit special features of the hardware and avoid associated architectural shortcomings. This paper presents an investigation into analysis and design mechanisms that reduce the execution time of real-time control algorithm implementations. The proposed mechanisms are exemplified by means of one algorithm, which demonstrates their applicability to real-time applications. An active vibration control (AVC) algorithm for a flexible beam system, simulated using the finite difference (FD) method, is considered to demonstrate the effectiveness of the proposed methods. A comparative performance evaluation of the proposed design mechanisms is presented and discussed through a set of experiments.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12027810 and 11322548).
Abstract: The neutron supermirror is an important neutron optical device that can significantly improve the efficiency of neutron transport in neutron guides and has been widely used at research neutron sources. Three types of algorithms, comprising approximately ten algorithms in total, have been developed for designing high-efficiency supermirror structures. In addition to its applications in neutron guides, in recent years the use of neutron supermirrors in neutron-focusing mirrors has been proposed to advance the development of neutron scattering and neutron imaging instruments, especially those at compact neutron sources. In this new application scenario, the performance of supermirrors strongly affects the instrument performance; therefore, a careful evaluation of the design algorithms is needed. In this study, we examine two issues: the effect of nonuniform film thickness distribution on a curved substrate and the effect of the specific neutron intensity distribution on the performance of neutron supermirrors designed using existing algorithms. The effect of film thickness nonuniformity is found to be relatively insignificant, whereas the effect of the neutron intensity distribution over Q (where Q is the magnitude of the scattering vector of incident neutrons) is considerable. Selection diagrams that show the best design algorithm under different conditions are obtained from these results. When the intensity distribution is not considered, empirical algorithms achieve the highest average reflectivity, whereas discrete algorithms perform best when the intensity distribution is taken into account. The reasons for the differences in performance between algorithms are also discussed. These findings provide a reference for selecting design algorithms for supermirrors used in neutron optical devices with unique geometries and can be very helpful for improving the performance of focusing supermirror-based instruments.
Abstract: Satellite constellation design for space optical systems is essentially a multiple-objective optimization problem. In this work, to tackle this challenge, we first categorize the performance metrics of the space optical system by taking into account the system tasks (i.e., target detection and tracking). We then propose a new non-dominated sorting genetic algorithm (NSGA) to maximize the system surveillance performance. Pareto optimal sets are employed to deal with the conflicts due to the presence of multiple cost functions. Simulation results verify the validity and the improved performance of the proposed technique over benchmark methods.
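To make the non-dominated sorting at the heart of an NSGA-style optimizer concrete, here is a minimal sketch in Python; the two minimized objectives and the random population stand in for the paper's surveillance metrics and constellation encodings, which are not reproduced here.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return a list of fronts (lists of indices), front 0 being the Pareto set."""
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Illustrative population: each design scored on two minimized objectives,
# e.g. (1 - coverage fraction, mean revisit time in minutes).
random.seed(0)
population = [(random.random(), random.uniform(10, 120)) for _ in range(12)]
for rank, front in enumerate(non_dominated_sort(population)):
    print(f"front {rank}: {front}")
```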
Funding: Supported by the Key International S&T Cooperation Project (Grant No. 2008DFA81280). Part of this work was developed within project No. 27 of the Italy-China program 2006–2009. A joint study by the first author at the Laboratory of Robotics and Mechatronics (LARM) during 2007–2008 was supported by the state scholarship program of the China Scholarship Council (CSC) and by the Innovation Foundation of Beijing University of Aeronautics and Astronautics (BUAA) for PhD Graduates.
Abstract: A design approach is presented in this paper for underactuation in robotic finger mechanisms. The characteristics of underactuated finger mechanisms based on linkage and spring systems are introduced. The self-adaptive enveloping grasp achieved by underactuated finger mechanisms is discussed as a feasible way of grasping unknown objects. The design problem of robotic fingers is analyzed by looking at many aspects of optimal functionality. Design problems and requirements for underactuated mechanisms are formulated as related to human-like robotic fingers. In particular, characteristics of finger mechanisms are analyzed and optimality criteria are summarized with the aim of formulating a general design algorithm. A general multi-objective optimization design approach is applied, based on a suitable optimization problem that uses suitable expressions of the optimality criteria. An example is illustrated as an improvement of the finger mechanism in the Laboratory of Robotics and Mechatronics (LARM) Hand. Results of design outputs and grasp simulations are reported with the aim of showing the practical feasibility of the proposed concepts and computations.
Funding: Supported by the National Natural Science Foundation of China (10832004 and 10672084).
Abstract: Based on the trajectory design of a mission to Saturn, this paper discusses four different trajectories in various swingby cases. We assume a single impulse to be applied in each case when the spacecraft approaches a celestial body. Some optimal trajectories of EJS, EMS, EVEJS and EVVEJS flying sequences are obtained using five global optimization algorithms: DE, PSO, DP, the hybrid algorithm PSODE and another hybrid algorithm, DPDE. DE is proved to be superior to the other non-hybrid algorithms in the trajectory optimization problem. The hybrid algorithm of PSO and DE can improve the optimization performance of DE, which is validated by the mission to Saturn with given swingby sequences. Finally, the optimization results of the four different swingby sequences are compared with those of the ACT of ESA.
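As an illustration of the differential evolution scheme that the comparison found strongest, the sketch below implements a basic DE/rand/1/bin loop on a stand-in cost function; the real objective (total delta-v of an impulsive swingby trajectory) would require ephemeris and Lambert-solver machinery that is not reproduced here.

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors, binomial crossover."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)          # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            f = cost(trial)
            if f <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Stand-in cost: sphere function in place of the total delta-v of a swingby sequence.
x, f = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 4)
print(x, f)
```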
Abstract: Advanced engineering systems, like aircraft, are defined by tens or even hundreds of design variables. Building an accurate surrogate model for use in such high-dimensional optimization problems is a difficult task owing to the curse of dimensionality. This paper presents a new algorithm to reduce the size of a design space to a smaller region of interest, allowing a more accurate surrogate model to be generated. The framework requires a set of models of different physical or numerical fidelities. The low-fidelity (LF) model provides a physics-based approximation of the high-fidelity (HF) model at a fraction of the computational cost. It is also instrumental in identifying the small region of interest in the design space that encloses the high-fidelity optimum. A surrogate model is then constructed to match the low-fidelity model to the high-fidelity model in the identified region of interest. The optimization process is managed by an update strategy to prevent convergence to false optima. The algorithm is applied to mathematical problems and a two-dimensional aerodynamic shape optimization problem in a variable-fidelity context. Results obtained are in excellent agreement with high-fidelity results, even with lower-fidelity flow solvers, while showing up to 39% time savings.
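A one-variable sketch of the design-space-reduction idea, under assumed analytic stand-ins for the high- and low-fidelity models (the hf/lf functions, the ROI half-width, and the quadratic correction below are illustrative choices, not the paper's): the cheap model locates a region of interest, a few expensive samples build a correction surrogate there, and the corrected model is optimized only inside that region.

```python
import numpy as np

# Hypothetical high- and low-fidelity models of one design variable; the real case
# would be an expensive solver (HF) and a coarse physics-based model (LF).
hf = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # "truth", expensive to evaluate
lf = lambda x: 0.5 * hf(x) + 10 * (x - 0.5) - 5        # cheap, biased approximation

# Step 1: exhaustive search on the cheap LF model locates a region of interest.
grid = np.linspace(0, 1, 401)
x_lf = grid[np.argmin(lf(grid))]
half_width = 0.15                                      # assumed ROI size
roi = np.clip([x_lf - half_width, x_lf + half_width], 0, 1)

# Step 2: a few HF samples inside the ROI train an additive correction surrogate
# d(x) ~ hf(x) - lf(x), here a simple quadratic least-squares fit.
xs = np.linspace(roi[0], roi[1], 5)
coeff = np.polyfit(xs, hf(xs) - lf(xs), 2)
corrected = lambda x: lf(x) + np.polyval(coeff, x)

# Step 3: optimize the corrected model inside the ROI only.
roi_grid = np.linspace(roi[0], roi[1], 201)
x_star = roi_grid[np.argmin(corrected(roi_grid))]
print(f"LF optimum {x_lf:.3f}, corrected optimum {x_star:.3f}, HF value {hf(x_star):.3f}")
```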
Funding: Supported by the National Natural Science Foundation of China (No. 51179040) and the Natural Science Foundation of Heilongjiang Province (No. E200904).
Abstract: The plow of the submarine plowing trencher is one of its main functional mechanisms, and its optimization is very important. The design parameters play a very significant role in determining the towing force required of a vessel. A multi-objective genetic algorithm based on analytical models of the plow surface has been examined and applied in efforts to obtain an optimal design of the plow. For a specific soil condition, the draft force and moldboard surface area, which are the key parameters in the working process of the plow, are optimized by finding the corresponding optimal values of the plow blade penetration angle and two surface angles of the main cutting blade of the plow. Parameters such as the moldboard side angle of deviation, moldboard lift angle, angular variation of the tangent line, and the spanning length are also analyzed with respect to the force on the moldboard surface along the soil flow direction. Results show that the optimized plow has improved performance. The draft forces of the main cutting blade and the moldboard are 10.6% and 7% lower, respectively, than in the original design. The standard deviation of the Gaussian curvature of the moldboard is lowered by 64.5%, which implies that the optimized moldboard surface is much smoother than the original.
Funding: Supported by the National Natural Science Foundation of China (11771393, 11632015) and the Natural Science Foundation of Zhejiang Province, China (LZ14A010002).
Abstract: K-mers can be used to describe biological sequences, and the k-mer distribution is a tool for solving sequence analysis problems in bioinformatics. A k-mer vector can be used to represent the k-mer distribution of a biological sequence. Problems such as similarity calculations or sequence assembly can be described in the k-mer vector space. This helps us to identify new features of old sequence-based problems in bioinformatics and to develop new algorithms using concepts and methods from linear space theory. In this study, we defined the k-mer vector space for generalized biological sequences. The meanings of the corresponding vector operations are explained in the biological context. We present the vector/matrix form of several widely seen sequence-based problems, including read quantification, sequence assembly, and pattern detection. Its advantages and disadvantages are discussed. We also implemented a tool for the sequence assembly problem based on the concepts of the k-mer vector methods, which shows the practicability and convenience of this algorithm design strategy.
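A minimal sketch of the k-mer vector representation described above: a DNA sequence maps to a 4^k-dimensional count vector, and sequence similarity becomes an inner-product (cosine) computation in that space; the sequences and the choice k = 3 are purely illustrative.

```python
from itertools import product
from math import sqrt

def kmer_vector(seq, k):
    """Map a DNA sequence to its 4**k-dimensional k-mer count vector."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        vec[index[seq[i:i + k]]] += 1
    return vec

def cosine(u, v):
    """Similarity of two sequences expressed as an inner product in k-mer space."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

a = kmer_vector("ACGTACGTGGTACC", k=3)
b = kmer_vector("ACGTACGAGGTACC", k=3)
print(cosine(a, b))
```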
Funding: Supported by the Ministry of Science and Higher Education of the Russian Federation under Grant No. FSUN-2023-0007.
Abstract: Some electrical parameters of the SIS-type hysteretic underdamped Josephson junction (JJ) can be measured from its current-voltage characteristics (IVCs). Currents and voltages at the JJ are commensurate with the intrinsic noise level of the measuring instruments. This leads to the need for multiple measurements with subsequent statistical processing. In this paper, digital algorithms are proposed for the automatic measurement of JJ parameters from IVCs. These algorithms make it possible to perform multiple measurements and check the JJ parameters in an automatic mode with the required accuracy. Complete sufficient statistics are used to minimize the root-mean-square error of parameter measurement. A sequence of current pulses with slow rising and falling edges is used to drive the JJ, and synchronous current and voltage readings at the JJ are used to realize the measurement algorithms. The algorithm performance is estimated through computer simulations. The significant advantage of the proposed algorithms is their independence from current source noise and the intrinsic noise of current and voltage meters, as well as their simple implementation in automatic digital measuring systems. The proposed algorithms can be used to control JJ parameters during mass production of superconducting integrated circuits, which will improve production efficiency and product quality.
Abstract: The purpose of computer-aided design of new adaptive pulsed arc welding technologies is to design optimum algorithms of pulsed control over the main energy parameters of welding. It permits welding productivity to be increased, the welding regime to be stabilized, and weld formation to be controlled taking into account its spatial position, and it improves the strength of welded joints and coatings. Computer-aided design reduces the time required to develop a new pulsed arc technology: it provides the optimization of technological regimes according to the operating conditions of welded joints and the prediction of the service life of the welds. The developed methodology of computer-aided design of advanced technologies, models, original software, adaptive algorithms of pulsed control, and special equipment permits regulation of penetration, the weld shape, and the size of the heat-affected zone, and prediction of the desired properties and quality of welded joints.
Abstract: Obtaining the optimal values of the parameters for the design of a required mould and the operation of the moulding process is difficult; this is due to the complexity of product geometry and the variation of plastic material properties. The typical parameters for the mould design and moulding process are melt flow length, injection pressure, holding pressure, back pressure, injection speed, melt temperature, mould temperature, clamping force, injection time, holding time and cooling time. This paper discusses the difficulties of using current computer-aided optimization methods to acquire the values of the parameters. A method based on the concept of the genetic algorithm is proposed to overcome these difficulties. The proposed method describes in detail how to attain the optimal values of the parameters from a given product geometry.
Abstract: In order to shorten the design period, this paper describes a new optimization strategy for computationally expensive design optimization of turbomachinery, combining design of experiment (DOE), response surface models (RSM), a genetic algorithm (GA) and a 3-D Navier-Stokes solver (Numeca Fine). Data points for response evaluations were selected by improved distributed hypercube sampling (IHS), and the 3-D Navier-Stokes analysis was carried out at these sample points. The quadratic response surface model was used to approximate the relationships between the design variables and flow parameters. To maximize the adiabatic efficiency, the genetic algorithm was applied to the response surface model to perform global optimization and achieve the optimum design of NASA Stage 35. An optimum leading edge line was found, which produced a new 3-D rotor blade combining sweep and lean, and a new stator blade with skew. It is concluded that the proposed strategy can provide a reliable method for design optimization of turbomachinery blades at reasonable computing cost.
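The response-surface step can be sketched as follows: a full quadratic model in two design variables is fitted by least squares to sampled responses and then searched globally; the analytic response, the random sampling, and the random-restart search below are stand-ins for the IHS design, the Navier-Stokes evaluations, and the GA used in the paper.

```python
import numpy as np

def quadratic_features(X):
    """Full quadratic basis in two variables: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Stand-in "experiment": pretend these samples came from a space-filling design
# and an expensive flow solver; here the response is a known analytic function.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))
y = 1.0 - 0.3 * X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 0]**2 - 0.6 * X[:, 1]**2

beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
surrogate = lambda pts: quadratic_features(pts) @ beta

# Cheap global search on the surrogate (random restarts in place of a GA).
cand = rng.uniform(-1, 1, size=(5000, 2))
best = cand[np.argmax(surrogate(cand))]
print("surrogate optimum near", best)
```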
Abstract: The essential and influential role of centrifugal compressors in a wide range of industries has led many engineers to research the design and optimization of centrifugal compressors. Centrifugal compressors are key to parts of the oil, gas and petrochemical industries as well as gas pipeline transport. Since a complete 3D design of the compressor consumes a considerable amount of time, most active companies in the field are profoundly interested in obtaining a design outline before taking any further steps in designing the entire machine. In this paper, a numerical algorithm named ACDA (adapted compressor design algorithm) for fast and accurate preliminary design of centrifugal compressors is presented. The design procedure is derived under real gas behavior, using an appropriate equation of state. Starting from the impeller inlet, the procedure continues with numerical calculations for the other sections, including the impeller exit, volute and exit diffuser. At each step, suitable correction factors are employed in order to arrive at precise numerical results. Finally, the achieved design result is compared with available reference data.
Funding: Project supported by the Special and Significant Project of China National Offshore Oil Corporation, “Study on the design of full-containment large LNG storage tank and engineering applications” (No. CNOOC-KJ125ZDXM14QD-04QD11).
Abstract: The design of the roof frame is one of the most important parts of LNG tank design. In China, however, the calculation of the roof frame system of extra-large LNG tanks currently faces a series of problems. For example, there is no unified yardstick for the buckling characteristic value, the calculation is based on many assumptions, and the calculation is inconsistent with domestic specifications and stipulations. In view of these problems, material non-linearity and structural non-linearity were introduced and initial defects were taken into consideration. Then, the large-scale non-linear finite element software ABAQUS was adopted to model the roof frame and liner system of extra-large LNG tanks and to calculate and analyze the forces applied to them and their stability. Finally, a complete design algorithm for the roof frame and liner system of extra-large LNG tanks was established and applied to the design of an LNG tank (20×10^4 m^3) in China. It is indicated that this design algorithm can simulate the actual situation accurately. The model is structurally composed of shell elements and beam elements connected at common nodes. Force calculations are conducted for 10 operational modes and buckling calculations for 7 operational modes, covering all operational modes arising during the construction of the roof frame and liner system of LNG tanks. It is also revealed that the maximum stress on the roof frame is 125.7 MPa, that on the liner is 101.4 MPa, and the minimum safety coefficient obtained from the buckling calculation is 2.57. Under these conditions, the force and stability of the LNG tank roof frame are satisfactory. The research results can be used as a reference for related design and calculation.
Funding: Funded by the National Natural Science Foundation of China (No. 52378008) and the Postgraduate Research Innovation Program of Jiangsu Province (No. KYCX22_0189).
Abstract: As society confronts increasingly complex demands and the growing need for carbon-neutral architecture, AI-driven design methodologies are evolving rapidly. However, the lack of a unified integration platform in the design process continues to hinder AI’s integration into real-world workflows. To address this challenge, we introduce ArchiWeb, a web-based platform specifically built to support AI-driven processes in early-stage architectural design. ArchiWeb transforms architectural representation and problem formulation by utilizing lightweight data protocols and a modular algorithmic network within an interactive web environment. Through its cloud-native, open-architecture framework, ArchiWeb enables deeper integration of AI technologies while accelerating the accumulation, sharing, and reuse of design knowledge across projects and disciplines. Ultimately, ArchiWeb aims to drive architectural design toward greater intelligence, efficiency, and sustainability, supporting the transition to data-informed, computationally enabled, and environmentally responsible design practices.
Funding: Funded by the National Key R&D Program of China (2020YFA0907000), the National Natural Science Foundation of China (Grant Nos. 32270657, 32271297, 82130055, 62072435), and the Youth Innovation Promotion Association, Chinese Academy of Sciences.
Abstract: This paper presents an overview of deep learning (DL)-based algorithms designed for solving the traveling salesman problem (TSP), categorizing them into four categories: end-to-end construction algorithms, end-to-end improvement algorithms, direct hybrid algorithms, and large language model (LLM)-based hybrid algorithms. We introduce the principles and methodologies of these algorithms, outlining their strengths and limitations through experimental comparisons. End-to-end construction algorithms employ neural networks to generate solutions from scratch, demonstrating rapid solving speed but often yielding subpar solutions. Conversely, end-to-end improvement algorithms iteratively refine initial solutions, achieving higher-quality outcomes but necessitating longer computation times. Direct hybrid algorithms directly integrate deep learning with heuristic algorithms, showcasing robust solving performance and generalization capability. LLM-based hybrid algorithms leverage LLMs to autonomously generate and refine heuristics, showing promising performance despite being in early developmental stages. In the future, further integration of deep learning techniques, particularly LLMs, with heuristic algorithms, together with advances in interpretability and generalization, will be pivotal trends in TSP algorithm design. These endeavors aim to tackle larger and more complex real-world instances while enhancing algorithm reliability and practicality. This paper offers insights into the evolving landscape of DL-based TSP solving algorithms and provides a perspective on future research directions.
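To make the construction-versus-improvement distinction concrete, the sketch below uses classical stand-ins: nearest-neighbour plays the role of an end-to-end construction policy (fast but suboptimal tours) and 2-opt plays the role of an iterative improvement step; in the surveyed DL methods these two components are replaced by learned models.

```python
import math, random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    """'Construction': build a tour city by city, always visiting the closest unvisited city."""
    unvisited, tour = set(range(1, len(pts))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """'Improvement': repeatedly reverse tour segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, new) < tour_length(pts, tour):
                    tour, improved = new, True
    return tour

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(30)]
t0 = nearest_neighbor(pts)
t1 = two_opt(pts, t0)
print(tour_length(pts, t0), "->", tour_length(pts, t1))
```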
Funding: This project is supported by the National Natural Science Foundation of China (Nos. 70471022 and 70501021), the Joint Research Scheme of the National Natural Science Foundation of China (No. 70418013), and the Hong Kong Research Grant Council, China (No. N_HKUST625/04).
Abstract: The product family design problem solved by evolutionary algorithms is discussed. A successful product family design method should achieve an optimal tradeoff among a set of competing objectives, which involves maximizing commonality across the family of products and optimizing the performance of each product in the family. A 2-level chromosome structured genetic algorithm (2LCGA) is proposed to solve this class of problems, and its performance is analyzed by comparing its results with those obtained with other methods. By interpreting the chromosome as a 2-level linear structure, the variable-commonality genetic algorithm (GA) is constructed to vary the amount of platform commonality and automatically search across varying levels of commonality for the platform while trying to resolve the tradeoff between commonality and individual product performance within the product family during the optimization process. By incorporating a commonality assessment index into the problem formulation, the 2LCGA optimizes the product platform and its corresponding family of products in a single stage, which can yield improvements in the overall performance of the product family compared with two-stage approaches (in which the first stage determines the best settings for the platform variables and values of the unique variables are found for each product in the second stage). The scope of the algorithm is also expanded by introducing a classification mechanism to allow multiple platforms to be considered during product family optimization, offering opportunities for superior overall designs through more efficacious tradeoffs between commonality and performance. The effectiveness of 2LCGA is demonstrated through the design of a family of universal electric motors and comparison against previous results.
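A small sketch of what a 2-level chromosome can look like: one level flags which variables are shared platform variables, the other carries the platform and per-product values, so a single genome decodes into one design vector per family member; the variable count, the dictionary encoding, and the commonality index below are illustrative assumptions, not the paper's exact formulation.

```python
import random

N_VARS, N_PRODUCTS = 4, 3   # e.g. simplified motor variables, three family members

def decode(chromosome):
    """Level 1: commonality flags; level 2: platform values plus per-product values."""
    flags = chromosome["common"]                     # 1 = platform (shared) variable
    platform = chromosome["platform_values"]
    unique = chromosome["unique_values"]             # unique[p][v] for product p
    designs = []
    for p in range(N_PRODUCTS):
        designs.append([platform[v] if flags[v] else unique[p][v]
                        for v in range(N_VARS)])
    return designs

def commonality_index(chromosome):
    """Fraction of variables shared across the whole family (to be maximized)."""
    return sum(chromosome["common"]) / N_VARS

random.seed(0)
chromo = {
    "common": [random.randint(0, 1) for _ in range(N_VARS)],
    "platform_values": [random.uniform(0, 1) for _ in range(N_VARS)],
    "unique_values": [[random.uniform(0, 1) for _ in range(N_VARS)]
                      for _ in range(N_PRODUCTS)],
}
print(decode(chromo), commonality_index(chromo))
```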
Funding: Supported by the Indonesian Government (No. BPPLN DIKTI 3+1).
Abstract: An inverted pendulum is a sensitive system of highly coupled parameters; in laboratories, it is popular for modelling nonlinear systems such as mechanisms and control systems, and also for optimizing programmes before those programmes are applied in real situations. This study aims to find the optimum input setting for a double inverted pendulum (DIP), which requires an appropriate input to be able to stand and to achieve robust stability even when the system model is unknown. Such a DIP input could be widely applied in engineering fields for optimizing unknown systems with a limited budget. Previous studies have used various mathematical approaches to optimize settings for the DIP and then designed control algorithms or physical mathematical models. This study did not adopt a mathematical approach for the DIP controller because our DIP has five input parameters within a nondeterministic system model. This paper proposes a novel algorithm, named UniNeuro, that integrates neural networks (NNs) and a uniform design (UD) in a model formed by the input and response of the experimental data (a metamodel). We employed a hybrid UD multiobjective genetic algorithm (HUDMOGA) to obtain the optimized input parameter settings. The UD was also embedded in the HUDMOGA to enrich the solution set, whereas each chromosome used for crossover, mutation, and generation of the UD was determined through a selection procedure and derived individually. Subsequently, we combined the Euclidean distance and the Pareto front to improve the performance of the algorithm. Finally, DIP equipment was used to confirm the settings. The proposed algorithm can produce nine alternative configurations of input parameter values for swing-up and then robustly stable standing of the DIP from only 25 training data items and 20 optimized simulation results. Compared with a full factorial design, this design can save considerable experiment time because the metamodel can be formed from only 25 experiments using the UD. Furthermore, the proposed algorithm can be applied to nonlinear systems with multiple constraints.
Funding: This work was supported by the Serbian Ministry of Science and Education (project TR-32022) and by the companies Telekom Srbija and Informatika.
Abstract: Data center networks may comprise tens or hundreds of thousands of nodes and, naturally, suffer from frequent software and hardware failures as well as link congestion. Packets are routed along the shortest paths with sufficient resources to facilitate efficient network utilization and minimize delays. In such dynamic networks, links frequently fail or become congested, making the recalculation of the shortest paths a computationally intensive problem. Various routing protocols have been proposed to overcome this problem by focusing on network utilization rather than speed. Surprisingly, the design of fast shortest-path algorithms for data centers has been largely neglected, though such algorithms are universal components of routing protocols. Moreover, parallelization techniques have mostly been deployed for random network topologies, not for the regular topologies that are often found in data centers. The aim of this paper is to improve scalability and reduce the time required for the shortest-path calculation in data center networks through parallelization on general-purpose hardware. We propose a novel algorithm that parallelizes edge relaxations as a faster and more scalable solution for popular data center topologies.
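A sketch of parallelized edge relaxation, written as a Bellman-Ford-style sweep whose per-round edge work is split across a thread pool; the toy edge list, the static edge partition, and the thread count are illustrative and omit the paper's data-center-specific topology handling.

```python
from concurrent.futures import ThreadPoolExecutor
import math

# Illustrative directed graph as an edge list (u, v, weight); a real data-center
# topology (fat-tree, leaf-spine) would be generated instead.
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (1, 3, 7.0), (2, 3, 3.0), (3, 4, 1.0)]
n_nodes, n_workers = 5, 4

def relax_chunk(chunk, dist):
    """Relax one slice of the edge list against a frozen distance snapshot."""
    updates = {}
    for u, v, w in chunk:
        cand = dist[u] + w
        if cand < dist[v] and cand < updates.get(v, math.inf):
            updates[v] = cand
    return updates

def parallel_bellman_ford(edges, n_nodes, source=0):
    dist = [math.inf] * n_nodes
    dist[source] = 0.0
    chunks = [edges[i::n_workers] for i in range(n_workers)]   # static edge partition
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_nodes - 1):                           # Bellman-Ford rounds
            snapshot = list(dist)
            changed = False
            for updates in pool.map(relax_chunk, chunks, [snapshot] * n_workers):
                for v, d in updates.items():
                    if d < dist[v]:
                        dist[v], changed = d, True
            if not changed:                                    # early exit on convergence
                break
    return dist

print(parallel_bellman_ford(edges, n_nodes))
```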
Funding: Supported by the Social Science Foundation of Shaanxi Province of China (2018P03) and the Humanities and Social Sciences Research Youth Fund Project of the Ministry of Education of China (13YJCZH251).
Abstract: In the K-means clustering algorithm, each data point is placed into exactly one category. The clustering quality is heavily dependent on the initial cluster centroids: different initializations can yield varied results, and local adjustment cannot save the clustering result from a poor local optimum. If there is an anomaly in a cluster, it will seriously affect the cluster mean value, and the K-means clustering algorithm is only suitable for clusters with convex shapes. We therefore propose a novel clustering algorithm, CARDBK, whose name combines "centroid all rank distance (CARD)", meaning that all centroids are sorted by their distance from a point, with "BK", the initials of "batch K-means". In CARDBK, a point modifies not only the cluster centroid nearest to it but also the centroids of multiple clusters adjacent to it, and the degree of influence of a point on a cluster centroid depends on the distance between the point and the other, nearer cluster centroids. Experimental results show that our CARDBK algorithm outperformed other algorithms when tested on a number of different data sets based on the following performance indexes: entropy, purity, F1 value, Rand index and normalized mutual information (NMI). Our algorithm also proved to be more stable, linearly scalable and faster.
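A sketch of the multi-centroid update idea behind CARDBK: in each batch iteration every point pulls its several nearest centroids, with influence decreasing with distance; the inverse-distance weighting and the parameter n_influenced below are illustrative assumptions, since the paper's exact weighting scheme is not reproduced here.

```python
import numpy as np

def card_style_update(X, centroids, n_influenced=3, iters=20):
    """Batch update in which every point pulls its several nearest centroids,
    with influence decreasing with distance (illustrative weighting; the exact
    CARDBK scheme may differ)."""
    k = len(centroids)
    for _ in range(iters):
        # rank all centroids by distance from each point
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # (n, k)
        order = np.argsort(d, axis=1)[:, :n_influenced]
        num = np.zeros_like(centroids)
        den = np.zeros(k)
        for i, point in enumerate(X):
            nearest = order[i]
            w = 1.0 / (d[i, nearest] + 1e-12)        # closer centroids get more pull
            w /= w.sum()
            for c, wc in zip(nearest, w):
                num[c] += wc * point
                den[c] += wc
        centroids = np.where(den[:, None] > 0,
                             num / np.maximum(den, 1e-12)[:, None], centroids)
    return centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ([0, 0], [3, 3], [0, 4])])
init = X[rng.choice(len(X), size=3, replace=False)]
print(card_style_update(X, init.copy(), n_influenced=2))
```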