Language Modeling for Dialog Systems
Language modeling for the speech recognizer in a dialog system can take two forms. Human input can be constrained through a directed dialog, allowing the decoder to use a state-specific language model to improve recognition accuracy. Mixed-initiative systems allow for human input that, while domain-specific, might not be state-specific. Nevertheless, for the most part, human input to a mixed-initiative system is predictable, particularly given information about the immediately preceding system prompt. The work reported in this paper addresses the problem of balancing state-specific and general language modeling in a mixed-initiative dialog system. By incorporating dialog-state adaptation of the language model, we have reduced the recognition error rate by 11.5%.
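One common way to balance a state-specific and a general language model is to linearly interpolate their probabilities, with the mixture weight tied to the current dialog state. The following is a minimal Python sketch of that idea only; the class, the dictionary-based model representation, and the weight value of 0.7 are illustrative assumptions, not the adaptation scheme described in this paper.

    # Sketch: P(w | h, state) = w_s * P_state(w | h) + (1 - w_s) * P_general(w | h)
    # All names and numbers below are hypothetical, for illustration only.

    class InterpolatedLM:
        def __init__(self, state_lm, general_lm, state_weight=0.7):
            # state_lm / general_lm map a history tuple to {word: probability}
            self.state_lm = state_lm
            self.general_lm = general_lm
            self.state_weight = state_weight  # would normally be tuned on held-out data

        def prob(self, word, history):
            # Missing entries contribute zero probability in this simplified sketch
            p_state = self.state_lm.get(history, {}).get(word, 0.0)
            p_general = self.general_lm.get(history, {}).get(word, 0.0)
            return self.state_weight * p_state + (1.0 - self.state_weight) * p_general

    # Example: "yes" is far more likely right after a confirmation prompt,
    # so the state-specific model boosts it relative to the general model.
    state_lm = {("<s>",): {"yes": 0.4, "no": 0.3, "the": 0.05}}
    general_lm = {("<s>",): {"yes": 0.05, "no": 0.05, "the": 0.2}}
    lm = InterpolatedLM(state_lm, general_lm, state_weight=0.7)
    print(lm.prob("yes", ("<s>",)))  # 0.7*0.4 + 0.3*0.05 = 0.295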