In this paper, a new probabilistic analytical approach, the minimal cut-based recursive decomposition algorithm (MCRDA), is presented to evaluate the seismic reliability of large-scale lifeline systems. Based on a minimal cut searching algorithm, the approach computes the disjoint minimal cuts one by one using the basic procedure of the recursive decomposition method, and in the process also obtains the disjoint minimal paths of the system. To improve computational efficiency, a probabilistic inequality is used to deliver a solution that satisfies a prescribed error bound. A series of case studies shows that MCRDA converges rapidly when the edges of the system have low reliabilities. The approach can therefore be used to evaluate large-scale lifeline systems subjected to strong seismic excitation.
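As a rough illustration of the error-bound check described above (not the authors' implementation), the probabilistic inequality can be sketched as follows: the disjoint minimal-path events already enumerated give a lower bound on system reliability, the disjoint minimal-cut events give an upper bound, and decomposition stops once the gap falls below the prescribed error bound. All probabilities below are hypothetical placeholders.

```python
def reliability_bounds(disjoint_path_probs, disjoint_cut_probs):
    """Bound system reliability from disjoint path/cut event probabilities.

    The events already enumerated by the decomposition are mutually
    exclusive, so their probabilities simply add up:
      lower bound = sum of disjoint minimal-path event probabilities
      upper bound = 1 - sum of disjoint minimal-cut event probabilities
    """
    lower = sum(disjoint_path_probs)
    upper = 1.0 - sum(disjoint_cut_probs)
    return lower, upper


# Hypothetical probabilities produced part-way through a decomposition.
paths = [0.52, 0.11, 0.04]   # disjoint minimal-path events found so far
cuts = [0.20, 0.07, 0.02]    # disjoint minimal-cut events found so far

lo, hi = reliability_bounds(paths, cuts)
if hi - lo < 0.05:           # prescribed error bound
    print(f"stop: reliability in [{lo:.3f}, {hi:.3f}]")
else:
    print(f"continue decomposing: gap {hi - lo:.3f} too wide")
```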
In this paper, an improved cut-based recursive decomposition algorithm is proposed for lifeline networks. First, a complementary structural function is established and three theorems are presented as the premise of the proposed algorithm. Taking the minimal cut of a network as the decomposition policy, the algorithm constructs a recursive decomposition process in which both the disjoint minimal cut set and the disjoint minimal path set are enumerated simultaneously. Therefore, in addition to obtaining an exact value after all disjoint minimal cuts and disjoint minimal paths have been decomposed, the algorithm provides approximate results that satisfy a prescribed error bound using a probabilistic inequality. Two example networks, including a large urban gas system, are analyzed with the proposed algorithm, and part of the results are compared with those obtained by a path-based recursive decomposition algorithm. The results show that the proposed algorithm provides a useful probabilistic analysis method for the reliability evaluation of lifeline networks and may be better suited to networks whose edges have low reliabilities.
The seismic reliability evaluation of lifeline networks has received considerable attention and been widely studied. In this paper, on the basis of an original recursive decomposition algorithm, an improved analytical approach to evaluate the seismic reliability of large lifeline systems is presented. The proposed algorithm takes the shortest path from the source to the sink of a network as the decomposition policy. Using the Boolean laws of set operation and the principles of probabilistic operation, a recursive decomposition process is constructed in which the disjoint minimal path set and the disjoint minimal cut set are enumerated simultaneously. As a result, a probabilistic inequality can be used to provide results that satisfy a prescribed error bound. During the decomposition process, unlike the original recursive decomposition algorithm, which only removes edges to simplify the network, the proposed algorithm simplifies the network by both merging nodes into sources and removing edges, and therefore yields simpler networks. Moreover, for a network whose component set contains s-independent components, two network reduction techniques are introduced to speed up the algorithm. A series of case studies, including an actual water distribution network and a large urban gas system, is calculated using the proposed algorithm. The results indicate that the proposed algorithm provides a useful probabilistic analysis method for the seismic reliability evaluation of lifeline networks.
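For readers who want a concrete reference point, the sketch below computes the exact two-terminal connectivity reliability of a toy network by the classical edge-factoring (pivotal decomposition) recursion. This is not the authors' path-based recursive decomposition; it only shows the quantity that the proposed algorithm bounds and approximates on much larger systems. The bridge network and its edge reliabilities are made up.

```python
def reachable(edges, s, t):
    """Is t reachable from s using the given (working) edges?"""
    adj = {}
    for u, v, _ in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {s}, [s]
    while stack:
        n = stack.pop()
        if n == t:
            return True
        for m in adj.get(n, ()) - seen:
            seen.add(m)
            stack.append(m)
    return False

def reliability(edges, s, t):
    """Edge-factoring recursion:
    R = p_e * R(edge e contracted) + (1 - p_e) * R(edge e deleted)."""
    if s == t:
        return 1.0
    if not reachable(edges, s, t):
        return 0.0                      # source and sink can no longer be joined
    (u, v, p), rest = edges[0], edges[1:]
    # Contract e: merge node v into u and drop any self-loops that appear.
    contracted = [(u if a == v else a, u if b == v else b, q)
                  for a, b, q in rest]
    contracted = [(a, b, q) for a, b, q in contracted if a != b]
    s_c = u if s == v else s
    t_c = u if t == v else t
    return (p * reliability(contracted, s_c, t_c)
            + (1 - p) * reliability(rest, s, t))

# Hypothetical 4-node "bridge" network with made-up edge reliabilities.
edges = [("s", "a", 0.9), ("s", "b", 0.8), ("a", "b", 0.7),
         ("a", "t", 0.8), ("b", "t", 0.9)]
print(round(reliability(edges, "s", "t"), 4))
```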
The outer-product decomposition algorithm (OPDA) performs well at blindly identifying a system function. However, directly applying the OPDA to systems with a bandpass source leads to errors. This study proposes an approach to enhance the channel estimation quality for a bandpass source using the OPDA. The approach performs a frequency-domain transformation on the received signal and obtains the optimal transformation parameter by minimizing the p-norm of an error matrix. Moreover, the proposed approach extends the application of the OPDA from a white source to a bandpass white source or a chirp signal. Theoretical formulas and simulation results show that the proposed approach not only reduces the estimation error but also accelerates the algorithm in a bandpass system, making it highly feasible for practical blind system identification applications.
This paper presents and analyzes a monotone domain decomposition algorithm for solving nonlinear singularly perturbed reaction-diffusion problems of parabolic type. To solve the nonlinear weighted-average finite difference scheme for the partial differential equation, we construct a monotone domain decomposition algorithm based on a Schwarz alternating method and a box-domain decomposition. The algorithm needs only to solve linear discrete systems at each iterative step and converges monotonically to the exact solution of the nonlinear discrete problem. The rate of convergence of the monotone domain decomposition algorithm is estimated, and numerical experiments are presented.
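A minimal sketch of the Schwarz alternating structure on which the monotone algorithm is built, reduced to a linear, steady, one-dimensional reaction-diffusion model problem so that it stays short; the paper's method treats the nonlinear parabolic case with monotone iterations, which is not reproduced here. All parameters are illustrative.

```python
import numpy as np

# Model problem: -eps * u'' + c * u = f on (0, 1), u(0) = u(1) = 0.
eps, c = 1e-2, 1.0
N = 101                                  # grid points including boundaries
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)

def solve_subdomain(lo, hi, u_left, u_right):
    """Solve the finite-difference system on interior points lo..hi
    with Dirichlet data u_left, u_right taken from the other subdomain."""
    n = hi - lo + 1
    A = np.zeros((n, n))
    b = f[lo:hi + 1].copy()
    for k in range(n):
        A[k, k] = 2 * eps / h**2 + c
        if k > 0:
            A[k, k - 1] = -eps / h**2
        if k < n - 1:
            A[k, k + 1] = -eps / h**2
    b[0] += eps / h**2 * u_left          # boundary data enters the RHS
    b[-1] += eps / h**2 * u_right
    return np.linalg.solve(A, b)

u = np.zeros(N)                          # initial guess
left = (1, 60)                           # interior index ranges of the two
right = (40, N - 2)                      # overlapping subdomains
for it in range(30):                     # Schwarz alternating sweeps
    u[left[0]:left[1] + 1] = solve_subdomain(*left, u[0], u[left[1] + 1])
    u[right[0]:right[1] + 1] = solve_subdomain(*right, u[right[0] - 1], u[-1])
print("u at midpoint:", u[N // 2])
```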
In order to improve the measurement accuracy of moving target signals, an automatic target recognition model for moving target signals was established based on empirical mode decomposition (EMD) and a support vector machine (SVM). Automatic target recognition of the nonlinear and non-stationary Doppler signals of military targets using this model proceeds as follows. Firstly, the nonlinear and non-stationary Doppler signals were decomposed into a set of intrinsic mode functions (IMFs) using EMD. After the Hilbert transform of each IMF, the energy ratio of each IMF to the total of all IMFs was extracted as a feature of the military target. Then, the SVM was trained on these energy ratios to classify the military targets, and a genetic algorithm (GA) was used to optimize the SVM parameters in the solution space. The experimental results show that this algorithm achieves recognition accuracies of 86.15%, 87.93%, and 82.28% for tank, vehicle, and soldier targets, respectively.
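A hedged sketch of the feature-extraction and classification pipeline: EMD decomposition, Hilbert-transform energy ratios per IMF, and an SVM classifier. It assumes the third-party PyEMD package for the sifting, uses synthetic signals and labels, and replaces the paper's GA-based parameter optimization with a plain grid search for brevity.

```python
import numpy as np
from PyEMD import EMD                      # assumes the PyEMD package is installed
from scipy.signal import hilbert
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def imf_energy_ratios(signal, n_features=5):
    """Decompose a signal with EMD and return IMF energy ratios.

    The energy of each IMF envelope (via the Hilbert transform) is divided
    by the total energy over all IMFs, then padded/truncated to a fixed
    feature length so that every signal yields the same-sized vector."""
    imfs = EMD().emd(signal)
    energies = np.array([np.sum(np.abs(hilbert(imf)) ** 2) for imf in imfs])
    ratios = energies / energies.sum()
    out = np.zeros(n_features)
    out[:min(n_features, len(ratios))] = ratios[:n_features]
    return out

# Synthetic placeholder data: two classes of Doppler-like signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
features, labels = [], []
for k in range(40):
    f0 = 20 if k % 2 == 0 else 60          # hypothetical class frequencies
    sig = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
    features.append(imf_energy_ratios(sig))
    labels.append(k % 2)

# Grid search stands in for the paper's GA-based parameter optimization.
clf = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]}, cv=3)
clf.fit(np.array(features), np.array(labels))
print("best params:", clf.best_params_, "cv accuracy:", clf.best_score_)
```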
A new algorithm, named the segmented second empirical mode decomposition (EMD) algorithm, is proposed in this paper in order to reduce the computing time of EMD and make the EMD algorithm available for online time-frequency analysis. The original data is divided into segments of equal length, and each segment is processed based on the principle of the first-level EMD decomposition. The algorithm is compared with traditional EMD, and the results show that it is more useful and effective for analyzing nonlinear and non-stationary signals.
In order to solve the problem of rescue for potential incidents on expressway networks, an opportunity cost-based method is used to establish a resource dispatch decision model. The model dispatches rescue resources across the regional road network and determines the locations of the rescue depots and the numbers of service vehicles assigned to the potential incidents. Due to the computational complexity of the decision model, a scene decomposition algorithm is proposed. The algorithm decomposes the dispatch problem from multiple kinds of resources to a single resource and determines the original scene of rescue resources from the rescue requirements and the resource matrix. Finally, a convenient optimal dispatch scheme is obtained by decomposing each original scene and simplifying the objective function. To illustrate the application of the decision model and the algorithm, a case study of the expressway network in the area around Nanjing, China is presented, and the results show that the model and the proposed algorithm are appropriate.
Schwarz methods are an important class of domain decomposition methods. Using the Fourier transform, we derive the error propagation matrices and their spectral radii for the classical Schwarz alternating method and the additive Schwarz method applied to the biharmonic equation. We prove the convergence of the Schwarz methods from a new point of view and provide detailed information about the convergence speeds and their dependence on the overlap size of the subdomains. The obtained results are independent of any unknown constant and of the discretization method, and show that the Schwarz alternating method converges twice as quickly as the additive Schwarz method.
A Laplace decomposition algorithm is adopted to investigate numerical solutions of a class of nonlinear partial differential equations with a nonlinear term of any order, u_tt + au_xx + bu + cu^p + du^(2p-1) = 0, which contains some important equations of mathematical physics. Three distinct initial conditions are constructed, and generalized numerical solutions are thereby obtained, including numerical hyperbolic function solutions and doubly periodic ones. Illustrative figures and comparisons between the numerical and exact solutions for different values of p are used to test the efficiency of the proposed method, and show that good results are achieved.
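The abstract does not spell the scheme out, but the Laplace decomposition recursion for this equation is typically set up as sketched below, with the nonlinear terms handled by Adomian polynomials; the notation here is an assumption based on the standard form of the method, not taken from the paper.

```latex
% Sketch of the Laplace decomposition recursion for
%   u_{tt} + a u_{xx} + b u + c u^{p} + d u^{2p-1} = 0,
% assuming the standard Adomian treatment of the nonlinear terms.
\begin{align*}
  U(x,s) &= \frac{u(x,0)}{s} + \frac{u_t(x,0)}{s^{2}}
            - \frac{1}{s^{2}}\,
              \mathcal{L}\!\left[a u_{xx} + b u + c u^{p} + d u^{2p-1}\right],\\
  u &= \sum_{n=0}^{\infty} u_n, \qquad
  u^{p} = \sum_{n=0}^{\infty} A_n, \qquad
  u^{2p-1} = \sum_{n=0}^{\infty} B_n
  \quad\text{(Adomian polynomials)},\\
  u_0 &= u(x,0) + t\,u_t(x,0),\\
  u_{n+1} &= -\mathcal{L}^{-1}\!\left[\frac{1}{s^{2}}\,
             \mathcal{L}\!\left[a\,\partial_{xx} u_n + b\,u_n
             + c\,A_n + d\,B_n\right]\right], \qquad n \ge 0.
\end{align*}
```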
A novel overlapping domain decomposition splitting algorithm based on a Crank-Nicolson method is developed for the stochastic nonlinear Schrödinger equation driven by a multiplicative noise with non-periodic boundary conditions. The proposed algorithm can significantly reduce the computational cost while maintaining similar conservation laws. Numerical experiments illustrate the capability of the algorithm for different spatial dimensions as well as various initial conditions. In particular, we compare the performance of the overlapping domain decomposition splitting algorithm with the stochastic multi-symplectic method in [S. Jiang et al., Commun. Comput. Phys., 14 (2013), 393-411] and the finite difference splitting scheme in [J. Cui et al., J. Differ. Equ., 266 (2019), 5625-5663]. We observe that our proposed algorithm has excellent computational efficiency and is highly competitive, providing a useful tool for solving stochastic partial differential equations.
In order to solve the flexible job shop scheduling problem with variable batches, we propose an improved multi-objective optimization algorithm that combines the idea of inverse scheduling. First, a flexible job shop scheduling model with variable batches is formulated. Second, we propose a batch optimization algorithm with inverse scheduling, in which the batch size is adjusted by a dynamic feedback batch adjustment method. Moreover, two methods are developed to increase the diversity of the population: a threshold to control the neighborhood updating, and a dynamic clustering algorithm to update the population. Finally, a group of experiments is carried out. The results show that the improved multi-objective optimization algorithm effectively ensures the diversity of the Pareto solutions and performs well in solving the flexible job shop scheduling problem with variable batches.
For photovoltaic power prediction, a sparse representation modeling method using feature extraction techniques is proposed. First, all the factors affecting the photovoltaic power output are regarded as the input data of the model. Next, dictionary learning techniques using the K-singular value decomposition (K-SVD) algorithm and the orthogonal matching pursuit (OMP) algorithm are used to obtain the corresponding sparse coding of all the input data, i.e., the initial dictionary. Then, to build the global prediction model, the sparse coding vectors are used as the input of a kernel extreme learning machine (KELM). Finally, to verify the effectiveness of the combined K-SVD-OMP and KELM method, the proposed method is applied to an instance of photovoltaic power prediction. Compared with KELM, SVM, and ELM under the same conditions, the experimental results show that the combined sparse representation methods achieve better prediction results, among which the combined K-SVD-OMP and KELM method shows the best prediction results and modeling accuracy.
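A hedged sketch of the modeling pipeline on synthetic data: scikit-learn's DictionaryLearning with OMP-based coding stands in for the K-SVD/OMP pair, and kernel ridge regression with an RBF kernel stands in for the KELM predictor; the feature set, dictionary size, and hyperparameters are all illustrative, not those of the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic placeholder data: rows = samples of weather/irradiance factors,
# target = photovoltaic power output (all values are made up).
rng = np.random.default_rng(1)
X = rng.random((300, 8))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Dictionary learning with OMP-based sparse coding; scikit-learn's
# DictionaryLearning stands in for the K-SVD step described in the paper.
dico = DictionaryLearning(n_components=16, transform_algorithm="omp",
                          transform_n_nonzero_coefs=4, random_state=0)
codes_tr = dico.fit_transform(X_tr)
codes_te = dico.transform(X_te)

# Kernel ridge regression with an RBF kernel stands in for the KELM model.
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
model.fit(codes_tr, y_tr)
print("MAE on held-out data:",
      mean_absolute_error(y_te, model.predict(codes_te)))
```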
A new method for digital hearing aids to adaptively localize the speech source in noisy and reverberant environments is proposed. Based on the room reverberation model and the multichannel adaptive eigenvalue decomposition (MCAED) algorithm, the proposed method iteratively estimates the impulse response coefficients between the speech source and the microphones by an adaptive subgradient projection method. It then acquires the time delays of the microphone pairs and calculates the source position by a geometric method. Compared with the traditional normalized least mean square (NLMS) algorithm, the adaptive subgradient projection method achieves faster and more accurate convergence in a low signal-to-noise ratio (SNR) environment. Simulations for glasses-type digital hearing aids with a four-element square array demonstrate the robust performance of the proposed method.
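As a simplified stand-in for the delay-estimation step (not the adaptive eigenvalue decomposition itself), the sketch below estimates the inter-microphone time delay from the cross-correlation peak of two synthetic signals and converts it into a far-field direction of arrival; the sample rate, microphone spacing, and signals are made up.

```python
import numpy as np

fs = 16000.0                       # sample rate (Hz), hypothetical
c = 343.0                          # speed of sound (m/s)
d = 0.05                           # microphone spacing (m), hypothetical

# Synthetic far-field scenario: the same noisy speech-like signal reaches
# the second microphone a couple of samples later.
rng = np.random.default_rng(2)
n = 4096
true_delay = 2                     # samples
src = rng.standard_normal(n)
mic1 = src + 0.1 * rng.standard_normal(n)
mic2 = np.roll(src, true_delay) + 0.1 * rng.standard_normal(n)

# Cross-correlation peak gives the inter-microphone time delay; the paper
# instead estimates impulse responses adaptively and derives delays from them.
xcorr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(xcorr) - (n - 1)   # samples by which mic2 trails mic1
tau = lag / fs

# Far-field geometry: tau = d * sin(theta) / c  =>  direction of arrival.
theta = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"estimated lag: {lag} samples, DOA approx {theta:.1f} degrees")
```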
This article presents a systematic research methodology of modular design for conceptual auto body frames using a hybrid optimization method. A modified graph-based decomposition optimization algorithm is utilized to generate an optimal body-in-white (BIW) assembly topology model composed of "potential modules". The consistency constraint function in collaborative optimization is extended to simultaneously maximize the commonality of modules and minimize the performance loss of all car types in the same product family. A novel screening method is employed to select both "basic structure" and "reinforcement" modules based on the dimension optimization of the manufacturing elements and the optimal assembly mode; this allows for a more exhaustive modular platform design in contrast with existing methods. The proposed methodology is applied to a case study of the modular design of three conceptual auto body types on the same platform to validate its feasibility and effectiveness.
In order to facilitate the scientific management of large shipping companies, fleet planning under complicated circumstances has been studied. Based on multiple influencing factors, such as the techno-economic status of ships, the investment capacity of the company, the possible purchase of new ships, the buying and selling of second-hand vessels, and the chartering or renting of ships, a mixed-integer programming model for fleet planning is established. A large shipping company is used for an empirical study, and the Benders decomposition algorithm is employed to test the applicability of the proposed model. The results show that the model is capable of multi-route, multi-ship, and large-scale fleet planning and is thus helpful in supporting the decision making of large shipping companies.
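A toy version of such a fleet-planning model, written with the PuLP library on made-up numbers for a single route and period; the paper's multi-route, multi-ship model and its Benders decomposition solution procedure are not reproduced here.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus

# Toy one-route, one-period fleet plan with made-up numbers.
demand = 120_000        # tonnes of cargo to carry in the period
owned = 2               # ships already owned
cap = 25_000            # capacity per ship per period (tonnes)
cost_new, cost_used, cost_charter = 80.0, 55.0, 20.0   # cost per period
budget = 300.0          # investment budget for purchases

prob = LpProblem("toy_fleet_plan", LpMinimize)
buy_new = LpVariable("buy_new", lowBound=0, cat="Integer")
buy_used = LpVariable("buy_used", lowBound=0, cat="Integer")
charter = LpVariable("charter", lowBound=0, cat="Integer")

# Objective: total fleet cost for the period.
prob += cost_new * buy_new + cost_used * buy_used + cost_charter * charter
# Constraints: meet transport demand, respect the investment budget,
# and assume only one charter vessel is available on the market.
prob += cap * (owned + buy_new + buy_used + charter) >= demand, "meet_demand"
prob += cost_new * buy_new + cost_used * buy_used <= budget, "investment_cap"
prob += charter <= 1, "charter_availability"

prob.solve()
print(LpStatus[prob.status],
      {v.name: v.value() for v in (buy_new, buy_used, charter)})
```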
Calculating the connectivity of a complex network is a non-polynomial complexity problem. When the system reliability cannot be expressed as a function of element reliability, heuristic methods must be applied for optimization based on the connectivity of the network. The calculation structure of the connectivity of a complex network is analyzed in this paper, and the coefficient matrices of the second-order Taylor expansion of the system connectivity are generated from this calculation structure. An optimal schedule is obtained using a genetic algorithm (GA), in which the fitness of candidate solutions is calculated with the Taylor expansion of the system connectivity. The precise connectivity of the optimal schedule, and the Taylor expansion of the system connectivity itself, can be obtained by the improved Minty method or the recursive decomposition algorithm. When the error between the approximate connectivity and the precise value exceeds the assigned threshold, the optimization continues using the GA and the Taylor expansion of the system connectivity is renewed. This process is called the iterative GA. The iterative GA can be used on large networks for optimal reliability allocation. A temporary optimal result is generated in each iteration; these temporary optimal results approach the real optimum and can be regarded as a group of approximate optimal solutions useful in real projects.
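A bare-bones sketch of the iterative GA loop described above: a second-order Taylor surrogate of the connectivity is built by finite differences, a simple GA optimizes the reliability allocation against the surrogate, and the expansion point is renewed whenever the surrogate disagrees too much with the precise value. The exact_connectivity function, the cost model, and every number are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5                                   # number of network elements

def exact_connectivity(x):
    """Hypothetical stand-in for the exact (expensive) connectivity of the
    network, e.g. obtained by the Minty method or recursive decomposition.
    A cheap series-parallel expression is used so the sketch actually runs."""
    return x[0] * x[1] * (1 - (1 - x[2]) * (1 - x[3])) * x[4]

def taylor_surrogate(x0, h=1e-3):
    """Second-order Taylor expansion of the connectivity around x0, with
    gradient and Hessian estimated by central finite differences."""
    f0 = exact_connectivity(x0)
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        e_i = np.eye(n)[i] * h
        g[i] = (exact_connectivity(x0 + e_i)
                - exact_connectivity(x0 - e_i)) / (2 * h)
        for j in range(n):
            e_j = np.eye(n)[j] * h
            H[i, j] = (exact_connectivity(x0 + e_i + e_j)
                       - exact_connectivity(x0 + e_i - e_j)
                       - exact_connectivity(x0 - e_i + e_j)
                       + exact_connectivity(x0 - e_i - e_j)) / (4 * h * h)
    return lambda x: f0 + g @ (x - x0) + 0.5 * (x - x0) @ H @ (x - x0)

def fitness(x, surrogate, budget=4.0):
    """Surrogate connectivity, penalised when the (made-up) reliability
    budget -- the sum of element reliabilities -- is exceeded."""
    return surrogate(x) - 10.0 * max(0.0, x.sum() - budget)

def ga_step(pop, surrogate):
    """One generation: tournament selection, arithmetic crossover, mutation."""
    fit = np.array([fitness(ind, surrogate) for ind in pop])
    new = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if fit[i] > fit[j] else pop[j]
        k, l = rng.integers(len(pop), size=2)
        b = pop[k] if fit[k] > fit[l] else pop[l]
        w = rng.random()
        child = w * a + (1 - w) * b + 0.02 * rng.standard_normal(n)
        new.append(np.clip(child, 0.5, 0.99))
    return new

pop = [rng.uniform(0.5, 0.99, n) for _ in range(40)]
x0 = np.full(n, 0.8)
surrogate = taylor_surrogate(x0)
for outer in range(10):                 # iterative GA: re-expand when needed
    for _ in range(20):
        pop = ga_step(pop, surrogate)
    best = max(pop, key=lambda ind: fitness(ind, surrogate))
    err = abs(surrogate(best) - exact_connectivity(best))
    if err < 1e-3:
        break
    x0 = best                           # renew the Taylor expansion point
    surrogate = taylor_surrogate(x0)
print("best allocation:", np.round(best, 3),
      "connectivity:", round(exact_connectivity(best), 4))
```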
A critical component of dealing with heart disease is real-time identification, which triggers rapid action. The main challenge of real-time identification lies in the rare occurrence of cardiac arrhythmias. Recent contributions to cardiac arrhythmia prediction using supervised learning approaches generally involve the use of demographic features (electronic health records), signal features (electrocardiogram features as signals), and temporal features. Since the signal of the electrical activity of the heartbeat is very sensitive to differences between high and low heartbeats, it is possible to detect some of the irregularities in the early stages of arrhythmia. This paper describes the training of supervised learning using features obtained from electrocardiogram (ECG) images to overcome the limitations of arrhythmia prediction based on demographic and electrocardiographic signal features. An experimental study demonstrates the usefulness of the proposed Arrhythmia Prediction by Supervised Learning (APSL) method, whose features are obtained from the image formats of the electrocardiograms used as input.
Conducting reasonable weapon-target assignment (WTA) in near real time can bring maximum rewards at minimum cost, which is especially significant in modern warfare. A framework of a dynamic WTA (DWTA) model based on a series of staged static WTA (SWTA) models is established, in which dynamic factors, including the time window of the target and the time window of the weapon, are considered in the staged SWTA model. Then a hybrid algorithm for the staged SWTA, named Decomposition-Based Dynamic Weapon-Target Assignment (DDWTA), is proposed. It is based on the framework of the multi-objective evolutionary algorithm based on decomposition (MOEA/D) with two major improvements: a resource-constraint-based coding to generate feasible solutions, and a tabu search strategy to speed up convergence. Comparative experiments prove that the proposed algorithm is capable of obtaining a well-converged and well-diversified set of solutions on a problem instance and meets the time demand of the battlefield environment.
Cache-enabled small cell networks have been regarded as a promising approach for network operators to cope with the explosive data traffic growth in future 5G networks. However, the user association and resource allocation mechanism has not been thoroughly studied under a given content placement situation. In this paper, we formulate the joint optimization of user association and resource allocation as a mixed integer nonlinear programming (MINLP) problem aiming at a balance between the total utility of data rates and the total data rates retrieved from caches. To solve this problem, we propose a distributed relaxing-rounding method. Simulation results demonstrate that the distributed relaxing-rounding method outperforms the traditional max-SINR method and the range-expansion method in terms of both the total utility of data rates and the total data rates retrieved from caches in practical scenarios. In addition, the effects of storage and backhaul capacities on performance are also studied.
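A centralized toy illustration of the relax-then-round idea (not the paper's distributed method or utility model): the binary association variables are relaxed to [0, 1], the linear relaxation is solved with scipy's linprog, and each user is then rounded to its largest fractional association. Rates and capacities are made up, and the rounded solution may still need a feasibility repair step.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_users, n_cells = 8, 3
rate = rng.uniform(1.0, 10.0, (n_users, n_cells))   # hypothetical achievable rates
cap = np.array([3, 3, 4])                            # users each cell can serve

# LP relaxation of the binary association variables x[u, b] in {0, 1}:
# maximize total rate s.t. each user picks one cell and cells respect capacity.
c = -rate.ravel()                                    # linprog minimizes
A_eq = np.zeros((n_users, n_users * n_cells))
for u in range(n_users):
    A_eq[u, u * n_cells:(u + 1) * n_cells] = 1.0     # sum_b x[u, b] = 1
b_eq = np.ones(n_users)
A_ub = np.zeros((n_cells, n_users * n_cells))
for b in range(n_cells):
    A_ub[b, b::n_cells] = 1.0                        # sum_u x[u, b] <= cap[b]
res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))

# Rounding step: attach each user to its largest fractional association.
# (May overload a cell; a repair pass would be needed in a real system.)
x = res.x.reshape(n_users, n_cells)
assoc = x.argmax(axis=1)
print("fractional optimum:", -res.fun)
print("rounded association:", assoc,
      "rounded total rate:", rate[np.arange(n_users), assoc].sum())
```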