Abstract
To reduce the network-management burden imposed by real-time monitoring, a fault monitoring policy for hierarchical, multi-manager networks is proposed. Applying multi-agent Markov Decision Processes, a new multi-manager network fault monitoring mechanism is established, and a reinforcement-learning-based polling policy under this mechanism is given. This hierarchical fault management technique based on multi-manager Markov Decision Processes reduces the number of polls, locates network faults accurately, and lowers the information overhead of network management.
In order to reduce the overhead caused by real-time network monitoring, a policy for fault monitoring of a hierarchical network with multiple managers is proposed. This fault monitoring policy is based on the model of multi-agent Markov Decision Processes (MMDP) and makes use of a reinforcement learning mechanism. Simulation results show that the proposed MMDP-based scheme makes network management more efficient by accurately detecting the faulty node and significantly reducing the management overhead.
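The paper itself does not list its algorithm here, but the idea of a reinforcement-learning-based polling policy can be illustrated with a minimal sketch. The code below is a hypothetical single-manager simplification (a one-state MDP, i.e. a bandit over nodes): the manager learns one Q-value per managed node, polls a node per step with epsilon-greedy exploration, receives a reward when the poll finds a fault and pays a small polling cost otherwise. All function names, the fault probabilities, and the reward values are assumptions for illustration, not taken from the paper.

```python
import random

def q_learning_polling(fault_probs, episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Learn which node a manager should poll most often.

    Hypothetical sketch: one Q-value per node (single-state MDP).
    Reward is 1.0 when the polled node turns out to be faulty,
    otherwise a small polling cost of -0.1 is incurred.
    """
    rng = random.Random(seed)
    q = [0.0] * len(fault_probs)
    for _ in range(episodes):
        # epsilon-greedy choice of which node to poll next
        if rng.random() < eps:
            node = rng.randrange(len(q))
        else:
            node = max(range(len(q)), key=q.__getitem__)
        # simulated poll: the node is faulty with its given probability
        reward = 1.0 if rng.random() < fault_probs[node] else -0.1
        # incremental Q-update toward the observed reward
        q[node] += alpha * (reward - q[node])
    return q

if __name__ == "__main__":
    # node 1 fails most often, so the policy should learn to poll it first
    q = q_learning_polling([0.05, 0.6, 0.1])
    print(max(range(len(q)), key=q.__getitem__))
```

In the paper's multi-manager setting, each mid-level manager would run such a learner over its own subnetwork, which is what cuts polling traffic compared with a single top-level manager polling every node.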
Source
《西安电子科技大学学报》
EI
CAS
CSCD
Peking University Core Journal
2005, No. 6, pp. 873-876, 906 (5 pages)
Journal of Xidian University
Funding
Supported by the Foundation of the State Key Laboratory of Integrated Services Networks (00JS63.2.1.DZ01)