
POMDP Model and Its Solution for Spoken Dialogue Systems

Cited by: 7
Abstract: Many spoken dialogue systems have come into practical use in recent years, yet no satisfactory model has been available for designing the dialogue manager. Treating dialogue strategy selection as a stochastic optimization problem and modeling it with a Markov decision process (MDP) is an emerging direction, but the MDP model cannot fully capture a dialogue system because of the uncertainty in the dialogue state. This paper proposes a new model for spoken dialogue systems based on the partially observable MDP (POMDP), which uses partial observability to handle that uncertainty. Because exact solution algorithms are limited in scalability, the paper examines the applicability of several heuristic approximation algorithms to this model and improves some of them; in particular, two grid-point selection methods are proposed for grid-based approximation algorithms.
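The belief-tracking idea the abstract describes — maintaining a probability distribution over hidden dialogue states instead of a single state — rests on the standard POMDP belief update, b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s). The sketch below is illustrative only, not the paper's code; the dialogue states, action, and observation model are hypothetical toy values.

```python
# Illustrative sketch (not the paper's implementation): the standard POMDP
# belief update a dialogue manager can use to track uncertainty over the
# dialogue state. T[(s, a, s2)] is the transition probability, and
# O[(a, s2, o)] is the probability of observing o after reaching s2 via a.

def belief_update(belief, action, observation, T, O):
    """Return the posterior belief b'(s') ∝ O(o|s',a) * sum_s T(s'|s,a) * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predict: probability of landing in s2 after taking `action`.
        predicted = sum(T[(s, action, s2)] * belief[s] for s in states)
        # Correct: weight by the likelihood of the observation in s2.
        new_belief[s2] = O[(action, s2, observation)] * predicted
    norm = sum(new_belief.values())
    if norm == 0:
        raise ValueError("Observation has zero probability under current belief")
    return {s: p / norm for s, p in new_belief.items()}

# Hypothetical two-state dialogue example: did the recognizer capture the
# user's goal correctly ("understood") or not ("misheard")?
states = ["understood", "misheard"]
T = {(s, "ask_confirm", s2): 1.0 if s == s2 else 0.0
     for s in states for s2 in states}          # confirming does not change the state
O = {("ask_confirm", "understood", "yes"): 0.8,
     ("ask_confirm", "understood", "no"): 0.2,
     ("ask_confirm", "misheard", "yes"): 0.3,
     ("ask_confirm", "misheard", "no"): 0.7}    # noisy user answers

b0 = {"understood": 0.6, "misheard": 0.4}
b1 = belief_update(b0, "ask_confirm", "yes", T, O)  # belief shifts toward "understood"
```

A "yes" answer raises the belief in "understood" from 0.6 to 0.8 here; it is over such belief points that the grid-based approximation algorithms mentioned in the abstract place their grid.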
Source: Journal of Computer Research and Development (《计算机研究与发展》), EI / CSCD / Peking University Core Journal, 2002, No. 2, pp. 217-224 (8 pages)
Keywords: spoken dialogue system, Markov decision process, speech recognition, POMDP model, approximation algorithm

