K-mers can be used to describe biological sequences, and the k-mer distribution is a tool for solving sequence analysis problems in bioinformatics. A k-mer vector can be used as a representation of the k-mer distribution of a biological sequence. Problems such as similarity calculation or sequence assembly can then be described in the k-mer vector space. This helps us identify new features of old sequence-based problems in bioinformatics and develop new algorithms using concepts and methods from linear space theory. In this study, we define the k-mer vector space for generalized biological sequences. The meaning of the corresponding vector operations is explained in the biological context. We present the vector/matrix form of several common sequence-based problems, including read quantification, sequence assembly, and pattern detection, and discuss the advantages and disadvantages of this formulation. We also implement a tool for the sequence assembly problem based on the k-mer vector concepts, which demonstrates the practicability and convenience of this algorithm design strategy.
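To make the representation concrete, here is a minimal sketch that builds a k-mer count vector for a DNA sequence and compares two sequences by cosine similarity in that vector space; the function names and the use of cosine similarity are illustrative choices, not the exact operations defined in the paper.

```python
from itertools import product
import math

def kmer_vector(seq, k=3, alphabet="ACGT"):
    """Count occurrences of every possible k-mer, returning a fixed-length vector."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    vec = [0] * len(index)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:          # skip k-mers containing ambiguous bases
            vec[index[kmer]] += 1
    return vec

def cosine_similarity(u, v):
    """Similarity of two sequences expressed through their k-mer vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

u = kmer_vector("ACGTACGTGACG", k=3)
v = kmer_vector("ACGTTCGTGACG", k=3)
print(cosine_similarity(u, v))
```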
A new multi-level analysis method that introduces super-element modeling, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method for ship structural design. The method is verified through its application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship subjected to static and quasi-static loads is used to evaluate the structural performance of the mid-ship module, including static strength and buckling performance. The results reveal that the new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
Canetti and Herzog have proposed universally composable symbolic analysis (UCSA) to analyze mutual authentication and key exchange protocols, but they do not analyze group key exchange protocols. This paper therefore explores an approach to analyzing group key exchange protocols that is automated and preserves cryptographic soundness. Because there are many kinds of group key exchange protocols and the number of participants in each protocol is arbitrary, this paper takes the Burmester-Desmedt (BD) protocol with three participants against a passive adversary (3-BD-Passive) as its case study. In a nutshell, our work lays the groundwork for analyzing group key exchange protocols automatically without sacrificing cryptographic soundness.
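For readers unfamiliar with the protocol under analysis, the sketch below reproduces the unauthenticated three-party Burmester-Desmedt exchange in a toy group, which is the setting of the 3-BD-Passive case; the modulus and base are placeholder values far too small for real use, and the sketch shows only the key computation, not the symbolic analysis.

```python
import random

# Toy public parameters (illustrative only, far too small for real deployments).
P = 4294967291   # a 32-bit prime
G = 5

def bd3_keys(p=P, g=G):
    """Unauthenticated 3-party Burmester-Desmedt exchange (passive-adversary setting)."""
    n = 3
    r = [random.randrange(2, p - 1) for _ in range(n)]      # secret exponents
    z = [pow(g, ri, p) for ri in r]                          # round 1: z_i = g^{r_i}
    # Round 2: X_i = (z_{i+1} / z_{i-1})^{r_i}
    X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, r[i], p) for i in range(n)]
    keys = []
    for i in range(n):
        # K_i = z_{i-1}^{n*r_i} * X_i^{n-1} * X_{i+1}^{n-2} ... (here n = 3)
        k = pow(z[(i - 1) % n], n * r[i], p)
        k = k * pow(X[i], 2, p) % p
        k = k * X[(i + 1) % n] % p
        keys.append(k)
    return keys

k1, k2, k3 = bd3_keys()
assert k1 == k2 == k3    # every party derives g^{r1*r2 + r2*r3 + r3*r1}
print(hex(k1))
```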
Considering the characteristics of underground engineering, this paper analyzes the feasibility of Multidisciplinary Design Optimization (MDO) for underground engineering and puts forward a modularization-based MDO method, together with the idea of using MDO to resolve problems in stability analysis, demonstrating the validity and feasibility of applying MDO in underground engineering. Uncertainty, complexity and nonlinearity remain bottlenecks for carrying out underground engineering stability analysis with MDO. The application of MDO to underground engineering stability analysis is therefore still at an exploratory stage and requires further in-depth research.
By introducing the idea of simulated annealing (SA) into the fitness function, an improved genetic algorithm (GA) is proposed for the optimal design of a pressure vessel, which aims to attain the minimum weight under a burst pressure constraint. The actual burst pressure is calculated using arc-length and restart analysis in finite element analysis (FEA). A penalty function in the fitness function is proposed to deal with the constrained problem. The effects of the population size and the number of generations in the GA on the weight and burst pressure of the vessel are explored. The optimization results of the proposed GA are also compared with those of the simple GA and the conventional Monte Carlo method.
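The abstract does not spell out the penalty form, so the following sketch shows one common way to fold a burst-pressure constraint into a GA fitness value through a penalty term; the weight and burst-pressure functions are stand-ins for the FEA evaluation used in the paper.

```python
def fitness(design, burst_pressure_fn, weight_fn, p_required, penalty=1e3):
    """Penalized fitness: minimize weight, penalize designs whose burst pressure
    falls below the required value (a common constraint-handling pattern)."""
    weight = weight_fn(design)
    p_burst = burst_pressure_fn(design)          # in the paper this comes from FEA
    violation = max(0.0, p_required - p_burst)   # zero when the constraint is met
    return weight + penalty * violation          # the GA minimizes this value

# Illustrative stand-ins for the FEA model (not the paper's actual vessel model):
weight_of = lambda d: d["thickness"] * d["radius"] * 7.85
burst_of = lambda d: 400.0 * d["thickness"] / d["radius"]

design = {"thickness": 0.02, "radius": 0.5}
print(fitness(design, burst_of, weight_of, p_required=12.0))
```

Any GA that minimizes this fitness will trade weight against constraint violation, with the penalty coefficient controlling how strongly infeasible designs are discouraged.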
Although computer architectures incorporate fast processing hardware resources, high-performance real-time implementation of a complex control algorithm requires an efficient design and software coding of the algorithm so as to exploit special features of the hardware and avoid associated architecture shortcomings. This paper presents an investigation into analysis and design mechanisms that reduce the execution time of real-time control algorithms. The proposed mechanisms are exemplified by means of one algorithm, which demonstrates their applicability to real-time applications. An active vibration control (AVC) algorithm for a flexible beam system, simulated using the finite difference (FD) method, is considered to demonstrate the effectiveness of the proposed methods. A comparative performance evaluation of the proposed design mechanisms is presented and discussed through a set of experiments.
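As a rough illustration of the simulation component, the sketch below advances a transversely vibrating beam (Euler-Bernoulli form y_tt = -mu^2 * y_xxxx + f) with an explicit central-difference scheme; the boundary treatment, forcing and parameter values are simplifying assumptions and not the exact beam model or data of the paper.

```python
import numpy as np

def beam_fd_step(y_now, y_prev, lam2, force):
    """One explicit time step of y_tt = -mu^2 * y_xxxx + f using central differences.
    lam2 = (mu * dt)^2 / dx^4; only interior points are updated, ends held at zero."""
    y_next = np.zeros_like(y_now)
    # Fourth spatial difference on interior points i = 2 .. N-3
    d4 = (y_now[4:] - 4 * y_now[3:-1] + 6 * y_now[2:-2]
          - 4 * y_now[1:-3] + y_now[:-4])
    y_next[2:-2] = 2 * y_now[2:-2] - y_prev[2:-2] - lam2 * d4 + force[2:-2]
    return y_next

# Illustrative run: 21 grid points, a single scaled impulse near one end.
n, lam2 = 21, 0.2            # lam2 kept small for numerical stability (assumption)
y_prev = np.zeros(n)
y_now = np.zeros(n)
f = np.zeros(n)
f[3] = 1e-3                  # impulse applied only on the first step (already scaled)
for step in range(100):
    y_now, y_prev = beam_fd_step(y_now, y_prev, lam2, f), y_now
    f[:] = 0.0
print(y_now[n // 2])
```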
Based on a brief introduction to the principles of wavelet analysis, this paper summarizes several typical wavelet bases from the point of view of perfect signal reconstruction and emphasizes that designing wavelet bases used to decompose a signal into a two-band form is equivalent to designing a two-band filter bank with perfect or near-perfect reconstruction. The generating algorithm corresponding to the Daubechies bases and some simulation results are also given.
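As a minimal concrete instance of the two-band idea, the sketch below splits a signal with the Haar analysis filters and reconstructs it exactly with the matching synthesis filters; higher-order Daubechies filters follow the same structure with longer coefficient sets.

```python
import math

def analyze(x):
    """Two-band (Haar) analysis: split x into approximation a and detail d."""
    s = 1 / math.sqrt(2)
    a = [s * (x[2 * n] + x[2 * n + 1]) for n in range(len(x) // 2)]
    d = [s * (x[2 * n] - x[2 * n + 1]) for n in range(len(x) // 2)]
    return a, d

def synthesize(a, d):
    """Matching synthesis bank: perfect reconstruction of the original signal."""
    s = 1 / math.sqrt(2)
    x = []
    for an, dn in zip(a, d):
        x.append(s * (an + dn))
        x.append(s * (an - dn))
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = analyze(x)
xr = synthesize(a, d)
assert all(abs(u - v) < 1e-12 for u, v in zip(x, xr))   # perfect reconstruction
print(a, d)
```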
Rolling element bearings are the most common machine elements in rotating machinery. An extended life is among the foremost criteria in the optimal design of rolling element bearings, and it depends on the fatigue failure, wear, and thermal conditions of the bearings. To fill this gap, the current work considers three objectives for a tapered roller bearing: the dynamic capacity, the minimum elasto-hydrodynamic lubrication (EHL) film thickness, and the maximum bearing temperature. The objective function formulations are presented, the associated design variables are identified, and the constraints are discussed. To solve the complex nonlinear constrained optimization formulations, a best-practice design procedure is investigated using Artificial Bee Colony (ABC) algorithms. A sensitivity analysis of several geometric design variables is conducted to observe their effect on all three objectives. The optimized bearing designs show a clear improvement over bearing standards and previously published designs. The present study complements the experience-based design practice followed in the bearing industry, saving time and allowing bearing performance to be assessed before manufacturing. An experimental investigation is worth conducting to verify the improvement.
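Since the solver details are left to the ABC literature, the sketch below shows the standard Artificial Bee Colony loop (employed, onlooker and scout phases) on a generic bounded minimization problem; the objective, constraint penalty, bounds and parameter values are placeholders rather than the bearing formulations.

```python
import random

def abc_minimize(f, bounds, n_sources=20, limit=30, iters=200):
    """Standard ABC loop: employed bees, fitness-proportional onlookers, scouts."""
    dim = len(bounds)
    rand_solution = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    X = [rand_solution() for _ in range(n_sources)]
    F = [f(x) for x in X]
    trials = [0] * n_sources

    def try_neighbor(i):
        """Perturb one dimension of source i toward/away from a random partner."""
        k = random.choice([j for j in range(n_sources) if j != i])
        j = random.randrange(dim)
        v = X[i][:]
        v[j] += random.uniform(-1, 1) * (X[i][j] - X[k][j])
        v[j] = min(max(v[j], bounds[j][0]), bounds[j][1])
        fv = f(v)
        if fv < F[i]:
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                 # employed bee phase
            try_neighbor(i)
        fit = [1.0 / (1.0 + fi) if fi >= 0 else 1.0 + abs(fi) for fi in F]
        for _ in range(n_sources):                 # onlooker phase (roulette wheel)
            i = random.choices(range(n_sources), weights=fit)[0]
            try_neighbor(i)
        for i in range(n_sources):                 # scout phase: abandon stale sources
            if trials[i] > limit:
                X[i] = rand_solution()
                F[i], trials[i] = f(X[i]), 0
    best = min(range(n_sources), key=lambda i: F[i])
    return X[best], F[best]

# Placeholder objective: a penalized sphere function standing in for the bearing model.
obj = lambda x: sum(v * v for v in x) + 100.0 * max(0.0, 1.0 - (x[0] + x[1]))
print(abc_minimize(obj, bounds=[(-5, 5)] * 3))
```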
To obtain the optimal process parameters for stamping forming, finite element analysis and optimization techniques were integrated by transforming the multi-objective problem into a single-objective one. A Pareto-based genetic algorithm was applied to optimize the head stamping forming process. In the proposed optimization model, fracture, wrinkling and thickness variation are functions of several factors, such as fillet radius, draw-bead position, blank size and blank-holding force. It is therefore necessary to investigate the relationship between the objective functions and the variables in order to minimize the objective functions simultaneously. Firstly, a central composite design (CCD) with four factors and five levels was applied, and the corresponding experimental data were acquired. Then, a response surface model (RSM) was built, and the analysis of variance (ANOVA) shows that the response surface model reliably predicts the fracture, wrinkling and thickness variation functions. Finally, a Pareto-based genetic algorithm was used to find a Pareto front that minimizes fracture, wrinkling and thickness variation together. A head stamping case indicates that the present method has higher precision and practicability than the "trial and error" procedure.
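As a small illustration of the RSM step, the code below fits a second-order response surface to designed-experiment data by least squares; the two-factor data here are synthetic placeholders, whereas the paper fits four factors (fillet radius, draw-bead position, blank size, blank-holding force) to the measured responses.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Second-order RSM terms for two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Synthetic experiment table (coded factor settings and a measured response).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([5.2, 4.1, 3.9, 2.8, 3.5, 4.6, 3.2, 4.4, 3.4])

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares RSM coefficients

def predict(x1, x2):
    """Evaluate the fitted response surface at a new factor setting."""
    return coef @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

print(coef)
print(predict(0.5, -0.5))
```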
Data center networks may comprise tens or hundreds of thousands of nodes and, naturally, suffer from frequent software and hardware failures as well as link congestion. Packets are routed along the shortest paths with sufficient resources to facilitate efficient network utilization and minimize delays. In such dynamic networks, links frequently fail or become congested, making the recalculation of the shortest paths a computationally intensive problem. Various routing protocols have been proposed to overcome this problem by focusing on network utilization rather than speed. Surprisingly, the design of fast shortest-path algorithms for data centers has been largely neglected, even though such algorithms are universal components of routing protocols. Moreover, parallelization techniques have mostly been deployed for random network topologies, not for the regular topologies often found in data centers. The aim of this paper is to improve scalability and reduce the time required for shortest-path calculation in data center networks through parallelization on general-purpose hardware. We propose a novel algorithm that parallelizes edge relaxations as a faster and more scalable solution for popular data center topologies.
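To illustrate parallel edge relaxation on general-purpose hardware, the sketch below runs a synchronous Bellman-Ford style iteration in which each round's relaxations are computed concurrently over chunks of the edge list and then merged; this is a generic illustration of the idea, not the specific algorithm proposed in the paper.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def relax_chunk(chunk, dist):
    """Compute candidate distance updates for one chunk of edges (read-only on dist)."""
    updates = {}
    for u, v, w in chunk:
        cand = dist[u] + w
        if cand < dist[v] and cand < updates.get(v, math.inf):
            updates[v] = cand
    return updates

def parallel_sssp(n, edges, source, workers=4):
    """Synchronous Bellman-Ford: each round relaxes all edges in parallel chunks."""
    dist = [math.inf] * n
    dist[source] = 0.0
    chunks = [edges[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(n - 1):                         # at most n-1 rounds
            results = list(pool.map(relax_chunk, chunks, [dist] * workers))
            changed = False
            for updates in results:                    # merge: keep the minimum per vertex
                for v, d in updates.items():
                    if d < dist[v]:
                        dist[v] = d
                        changed = True
            if not changed:
                break
    return dist

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0), (1, 3, 4.0)]
print(parallel_sssp(4, edges, source=0))   # [0.0, 1.0, 3.0, 4.0]
```

Threads only illustrate the round structure here; a CPU-bound Python implementation would need processes or native code to obtain an actual speedup.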
In the K-means clustering algorithm, each data point is placed into exactly one cluster. The clustering quality is heavily dependent on the initial cluster centroids: different initializations can yield different results, and local adjustment cannot rescue the clustering from a poor local optimum. If there is an anomaly in a cluster, it seriously affects the cluster mean, and K-means is only suitable for clusters with convex shapes. We therefore propose a novel clustering algorithm, CARDBK, where "centroid all rank distance (CARD)" means that all centroids are sorted by their distance from a point and "BK" stands for "batch K-means". In CARDBK, a point modifies not only the cluster centroid nearest to it but also multiple adjacent cluster centroids, and the degree of influence of a point on a centroid depends on the distances between the point and the nearer cluster centroids. Experimental results show that CARDBK outperforms other algorithms on a number of data sets in terms of entropy, purity, F1 value, Rand index and normalized mutual information (NMI). Our algorithm also proves to be more stable, linearly scalable and faster.
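The core update idea, namely that a point pulls several nearby centroids with an influence that decays with distance, can be sketched as follows; the inverse-distance weighting over the k nearest centroids is an illustrative choice and not necessarily the exact CARDBK update rule.

```python
import numpy as np

def multi_centroid_update(X, centroids, k_influence=3, lr=0.1):
    """One batch pass: every point nudges its k_influence nearest centroids,
    with weights that decay with the point-to-centroid distance."""
    moves = np.zeros_like(centroids)
    weight_sums = np.zeros(len(centroids))
    for x in X:
        d = np.linalg.norm(centroids - x, axis=1)
        nearest = np.argsort(d)[:k_influence]            # rank centroids by distance
        w = 1.0 / (d[nearest] + 1e-12)                   # closer centroids get more pull
        w /= w.sum()
        for c, wc in zip(nearest, w):
            moves[c] += wc * (x - centroids[c])
            weight_sums[c] += wc
    mask = weight_sums > 0
    centroids[mask] += lr * moves[mask] / weight_sums[mask][:, None]
    return centroids

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
centroids = rng.normal(3, 2, (4, 2))
for _ in range(20):
    centroids = multi_centroid_update(X, centroids)
print(centroids)
```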
In this paper we consider a parallel algorithm that detects the maximizer of a unimodal function f(x) computable at every point of the unbounded interval (0, ∞). The algorithm consists of two modes: scanning and detecting. Search diagrams are introduced as a way to describe parallel search algorithms on unbounded intervals. Dynamic programming equations, combined with a series of linear programming problems, describe the relations between the results of every pair of successive parallel evaluations of f. Properties of optimal search strategies are derived from these equations. The worst-case complexity analysis shows that if the maximizer lies in an a priori unknown interval (0, n−1], then it can be detected after cp(n) = ⌈2 log_(⌈p/2⌉+1)(n+1)⌉ − 1 parallel evaluations of f(x), where p is the number of processors and the logarithm is taken to base ⌈p/2⌉+1.
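A single-processor version of the two-mode idea is easy to state: a scanning pass expands the search interval geometrically until the maximizer is bracketed, and a detecting pass then narrows the bracket. The sketch below uses doubling plus golden-section refinement as an illustration; the paper's contribution is the optimal way for p processors to share these evaluations, which the sketch does not capture.

```python
import math

def scan_bracket(f, step=1.0, growth=2.0):
    """Scanning mode: expand geometrically on (0, inf) until f starts decreasing."""
    a, b = 0.0, step
    fb = f(b)
    while True:
        c = b * growth
        fc = f(c)
        if fc < fb:                      # unimodal f has peaked inside (a, c)
            return a, c
        a, b, fb = b, c, fc

def detect_max(f, a, b, tol=1e-6):
    """Detecting mode: golden-section search for the maximizer inside (a, b)."""
    inv_phi = (math.sqrt(5) - 1) / 2
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
        else:
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
    return (a + b) / 2

f = lambda x: -(x - 37.5) ** 2          # unimodal with maximizer at 37.5
a, b = scan_bracket(f)
print(detect_max(f, a, b))              # ~37.5
```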
Previous studies show that interconnects occupy a large portion of the timing budget and area in FPGAs. In this work, we propose a time-multiplexing technique for FPGA interconnects. In order to fully exploit this interconnect architecture, we propose a time-multiplexed routing algorithm that actively identifies qualified nets and schedules them onto multiplexable wires. We validate the algorithm by using the router to implement 20 benchmark circuits on time-multiplexed FPGAs. We achieve a 38% smaller minimum channel width and a 3.8% smaller circuit critical path delay compared with the state-of-the-art architecture router when a wire can be time-multiplexed six times in a cycle.
Given a simple graph G with n vertices, m edges and k connected components, the spanning forest problem is to find a spanning tree of each connected component of G. This problem has applications to the electrical power demand problem, computer network design, circuit analysis, and so on. In this paper, we present an efficient parallel algorithm for constructing a spanning forest of a proper circle graph G on the EREW PRAM.
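For reference, a sequential spanning forest takes only a few lines with a union-find structure, as sketched below; the contribution of the paper is performing this construction in parallel on the EREW PRAM for proper circle graphs, which the sketch does not attempt.

```python
def spanning_forest(n, edges):
    """Return a list of tree edges forming a spanning tree in each component."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: keep it
            parent[ru] = rv
            forest.append((u, v))
    return forest

# Two components: {0, 1, 2} and {3, 4}
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
print(spanning_forest(5, edges))      # e.g. [(0, 1), (1, 2), (3, 4)]
```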
Given a simple graph G with n vertices and m edges, the spanning tree problem is to find a spanning tree of G. This problem has many applications, such as electric power systems, computer network design and circuit analysis. For a simple graph, the spanning tree problem can be solved in O(log n) time with O(m+n) processors on the CRCW PRAM. In general, more efficient parallel algorithms can be developed by restricting the class of graphs. In this paper, we propose a parallel algorithm that runs in O(log n) time with O(n/log n) processors on the EREW PRAM for constructing a spanning tree of a proper circle trapezoid graph.
Some electrical parameters of SIS-type hysteretic underdamped Josephson junctions (JJs) can be measured from their current-voltage characteristics (IVCs). The currents and voltages at a JJ are comparable to the intrinsic noise level of the measuring instruments, which creates the need for multiple measurements with subsequent statistical processing. In this paper, digital algorithms are proposed for the automatic measurement of JJ parameters from the IVC. These algorithms make it possible to perform multiple measurements and check the JJ parameters automatically with the required accuracy. Complete sufficient statistics are used to minimize the root-mean-square error of the parameter measurements. A sequence of current pulses with slowly rising and falling edges drives the JJ, and synchronous current and voltage readings at the JJ are used to realize the measurement algorithms. The algorithm performance is estimated through computer simulations. A significant advantage of the proposed algorithms is their independence from current source noise and from the intrinsic noise of the current and voltage meters, as well as their simple implementation in automatic digital measuring systems. The proposed algorithms can be used to control JJ parameters during the mass production of superconducting integrated circuits, improving production efficiency and product quality.
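The statistical core, reducing repeated noisy readings at each drive-current step to a complete sufficient statistic, is easy to illustrate: under additive Gaussian noise the per-step sample mean plays that role, as in the sketch below; the voltage model and noise level are placeholders rather than Josephson junction physics.

```python
import numpy as np

def averaged_ivc(drive_currents, measure_voltage, repeats=200):
    """Estimate the I-V curve by averaging repeated synchronous voltage readings.
    Under additive Gaussian noise the per-step sample mean is a complete
    sufficient statistic, so it minimizes the RMS estimation error."""
    means, stderrs = [], []
    for i in drive_currents:
        samples = np.array([measure_voltage(i) for _ in range(repeats)])
        means.append(samples.mean())
        stderrs.append(samples.std(ddof=1) / np.sqrt(repeats))
    return np.array(means), np.array(stderrs)

# Placeholder "instrument": a linear branch plus Gaussian measurement noise.
rng = np.random.default_rng(1)
noisy_meter = lambda i: 2.5 * i + rng.normal(0.0, 0.05)

currents = np.linspace(0.0, 1.0, 11)
v_est, v_err = averaged_ivc(currents, noisy_meter)
print(np.round(v_est, 3))
print(np.round(v_err, 4))
```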
In order to meet the requirements of networked synthesis optimization design for a micro component, a web-based three-level information framework and functional modules were proposed. Firstly, the finite element method (FEM) was used to analyze the coupled-energy-domain dynamic properties of virtual prototype instances and to obtain optimal information data. Secondly, rough set theory (RST) and a genetic algorithm (GA) were used to reduce the attributes, acquire optimality rules, and identify the key variables and constraint conditions in the synthesis optimization design. Finally, regression analysis (RA) and the GA were used to establish the synthesis optimization design model and carry out the optimization. A corresponding prototype system was also developed, and the synthesis optimization design of a thermally actuated micro-pump was carried out as a demonstration.
Recently, there has been growing interest worldwide in the seismic qualification of bridges, buildings and mechanical equipment due to the increase in accidents caused by earthquakes. A severe earthquake can cause serious problems in wind turbines and eventually interrupt their electric power supply. To prevent these undesirable problems, the structural design of a small vertical-axis wind turbine was optimized in this study for seismic qualification and light weight by using a Genetic Algorithm (GA) subject to design constraints such as the maximum stress limit, maximum deformation limit, and seismic acceleration gain limit. The structural design optimization was also conducted for four different initial design variable sets to confirm the robustness of the optimization algorithm used. The optimization results for the four different initial designs agreed well with one another, so the structural design optimization of a small vertical-axis wind turbine could be successfully accomplished.
The feedback vertex set (FVS) problem is to find a set of vertices of minimum cardinality whose removal renders the graph acyclic. The FVS problem has applications in several areas such as combinatorial circuit design, synchronous systems, computer systems, and very-large-scale integration (VLSI) circuits. The FVS problem is known to be NP-hard for simple graphs, but polynomial-time algorithms have been found for special classes of graphs. The intersection graph of a collection of arcs on a circle is called a circular-arc graph, and the normal Helly circular-arc graphs form a proper subclass of the circular-arc graphs. In this paper, we present an algorithm that solves the FVS problem on a normal Helly circular-arc graph with n vertices and m edges.
Recognizing the drawbacks of stand-alone computer-aided tools in engineering, several hybrid systems have been suggested with varying degrees of success. In transforming a design concept into a finished product, in particular, smooth interfacing of the design data is crucial to reduce product cost and time to market. A product model that contains the complete product description, and computer-aided tools that can understand each other, are the primary requirements for achieving this interfacing goal. This article discusses a development methodology for hybrid engineering software systems with particular focus on the application of soft computing tools such as genetic algorithms and neural networks. Forms of hybridization are discussed, and the applications are elaborated using two case studies. The main aim is to develop hybrid systems that combine the strengths of each tool, such as the learning, pattern recognition and classification power of neural networks with the capacity of genetic algorithms for global search and optimization. While most optimization tasks need some form of model, many processes in the mechanical engineering field are difficult to model using conventional modeling techniques. The proposed hybrid system handles such difficult-to-model processes and contributes to the effort of smoothly interfacing design data with other downstream processes.