Found 2 articles
1. Stochastic differential equation software reliability growth model with change-point (Cited by: 1)
Authors: 张楠, Cui Gang, Shu Yanjun, Liu Hongwei. High Technology Letters (EI, CAS), 2014, No. 4, pp. 383-389 (7 pages)
This paper presents software reliability growth models (SRGMs) with change-point based on the stochastic differential equation (SDE). Although SDE-based SRGMs have been developed for large-scale software systems, the existing models give only limited consideration to variation in the failure distribution over testing time: they assume that failures follow the same distribution throughout. In practice, however, the fault detection rate can be affected by various factors and may change at a certain point as testing proceeds. To address this issue, SDE SRGMs with change-point are proposed to precisely reflect variations in the failure distribution. A real data set is used to evaluate the new models, and the experimental results show that the proposed models have fairly accurate prediction capability.
Keywords: software reliability; continuous state space; stochastic differential equation (SDE); change-point
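The change-point idea behind this model can be illustrated with a minimal sketch. Note the assumptions: this uses a classic exponential (Goel-Okumoto-style) mean value function rather than the paper's full SDE formulation, and all parameter names and values (`a`, `b1`, `b2`, `tau`) are illustrative, not taken from the paper. The fault detection rate switches from `b1` to `b2` at the change-point `tau`, and the mean curve is kept continuous there:

```python
import math

def mean_failures(t, a=100.0, b1=0.10, b2=0.05, tau=20.0):
    """Expected cumulative failures m(t) for a change-point SRGM sketch.

    a   -- total expected number of faults (illustrative value)
    b1  -- fault detection rate before the change-point tau
    b2  -- fault detection rate after tau
    The two branches agree at t = tau, so the mean curve is continuous;
    an SDE-based model would add stochastic noise around such a curve.
    """
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))
```

The change-point makes the curve's growth slow down (or speed up) after `tau`, which is the "variation of the failure distribution" the abstract refers to.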
2. A parallel scheduling algorithm for reinforcement learning in large state space
Authors: Quan LIU, Xudong YANG, Ling JING, Jin LI, Jiao LI. Frontiers of Computer Science (SCIE, EI, CSCD), 2012, No. 6, pp. 631-646 (16 pages)
The main challenge in reinforcement learning is scaling up to larger and more complex problems. Aiming at this scaling problem, a scalable reinforcement learning method, DCS-SRL, is proposed on the basis of a divide-and-conquer strategy, and its convergence is proved. In this method, the learning problem in a large or continuous state space is decomposed into multiple smaller subproblems. Given a specific learning algorithm, each subproblem can be solved independently with limited available resources, and the component solutions can then be recombined to obtain the desired result. To address the question of prioritizing subproblems in the scheduler, a weighted priority scheduling algorithm is proposed, which ensures that computation is focused on regions of the problem space expected to be maximally productive. To expedite the learning process, a parallel method, DCS-SPRL, is derived by combining DCS-SRL with a parallel scheduling architecture; its subproblems are distributed among processors that can work in parallel. The experimental results show that learning based on DCS-SPRL has fast convergence and good scalability.
Keywords: divide-and-conquer strategy; parallel scheduling; scalability; large state space; continuous state space