Funding: Supported by the International Science & Technology Cooperation Program of China (No. 2010DFA14400), the National Natural Science Foundation of China (No. 60503015), and the National High Technology Research and Development Program of China (No. 2008AA01A201).
Abstract: This paper presents software reliability growth models (SRGMs) with change-point based on stochastic differential equations (SDEs). Although SDE-based SRGMs have been developed for large-scale software systems, the existing models give limited consideration to variation in the failure distribution over testing time: they assume that all failures follow the same distribution. In practice, however, the fault detection rate can be affected by various factors and may change at a certain point as testing proceeds. To address this issue, SDE-based SRGMs with change-point are proposed to reflect variations in the failure distribution more precisely. A real data set is used to evaluate the new models, and the experimental results show that the proposed models have fairly accurate prediction capability.
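The idea of a change-point in the fault detection rate can be sketched with a simple Euler-Maruyama simulation of an SDE-based SRGM. This is an illustrative sketch only: the drift form dN = b(t)(a - N)dt + σ(a - N)dW and all parameter values (a, b1, b2, τ, σ) are assumptions for demonstration, not the paper's fitted model.

```python
import numpy as np

def simulate_sde_srgm(a=100.0, b1=0.05, b2=0.12, tau=20.0,
                      sigma=0.02, T=60.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of an SDE-based SRGM with a change-point.

    N(t): cumulative faults detected by time t.
    dN = b(t) * (a - N) dt + sigma * (a - N) dW,
    where the detection rate b(t) switches from b1 to b2 at tau.
    (Hypothetical parameters -- for illustration only.)
    """
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    t = np.linspace(0.0, T, steps + 1)
    N = np.zeros(steps + 1)
    for i in range(steps):
        b = b1 if t[i] < tau else b2           # change-point in detection rate
        dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
        N[i + 1] = N[i] + b * (a - N[i]) * dt + sigma * (a - N[i]) * dW
        N[i + 1] = min(max(N[i + 1], 0.0), a)  # keep trajectory within [0, a]
    return t, N

t, N = simulate_sde_srgm()
```

After the change-point τ the larger rate b2 steepens the growth curve, which is the qualitative behavior the change-point models aim to capture.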
Funding: This paper was supported by the National Natural Science Foundation of China (Grant Nos. 61272005, 61070223, 61103045, 60970015, and 61170020), the Natural Science Foundation of Jiangsu (BK2012616, BK2009116), the High School Natural Foundation of Jiangsu (09KJA520002), and the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172012K04).
Abstract: The main challenge in reinforcement learning is scaling up to larger and more complex problems. To address this scaling problem, a scalable reinforcement learning method, DCS-SRL, is proposed on the basis of a divide-and-conquer strategy, and its convergence is proved. In this method, a learning problem with a large or continuous state space is decomposed into multiple smaller subproblems. Given a specific learning algorithm, each subproblem can be solved independently with limited resources, and the component solutions are then recombined to obtain the desired result. To prioritize subproblems in the scheduler, a weighted priority scheduling algorithm is proposed; it ensures that computation is focused on the regions of the problem space expected to be most productive. To expedite the learning process, a parallel method, DCS-SPRL, is derived by combining DCS-SRL with a parallel scheduling architecture. In DCS-SPRL, the subproblems are distributed among processors that work in parallel. Experimental results show that learning based on DCS-SPRL converges quickly and scales well.
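The scheduling idea above can be sketched as a priority queue over subproblems, where each subproblem's weight stands in for how productive further computation on it is expected to be. This is a hypothetical sketch: the subproblem identifiers, the error-as-priority weighting, and the fixed reduction factor are assumptions for illustration, not the paper's actual DCS-SRL algorithm.

```python
import heapq

def weighted_priority_schedule(errors, rounds=12, reduction=0.5):
    """Sketch of weighted priority scheduling over subproblems.

    Each subproblem carries a priority (here, its current error estimate).
    Each round, the scheduler picks the highest-priority subproblem and
    spends one learning slice on it, shrinking its error by `reduction`.
    (Illustrative only -- the paper's exact weighting is not reproduced.)
    """
    heap = [(-e, sid) for sid, e in errors.items()]  # max-heap via negation
    heapq.heapify(heap)
    current = dict(errors)
    for _ in range(rounds):
        neg_e, sid = heapq.heappop(heap)             # most productive subproblem
        current[sid] = -neg_e * reduction            # learning slice shrinks error
        heapq.heappush(heap, (-current[sid], sid))
    return current

# e.g. three subproblems obtained by decomposing a large state space
final = weighted_priority_schedule({"S1": 8.0, "S2": 4.0, "S3": 2.0})
```

The subproblem with the largest remaining error receives the next computation slice, so effort automatically concentrates on the least-solved regions, which mirrors the scheduler's stated goal.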