End-to-End Speech Recognition on Conversations

2020-01-22T19:35:55Z (GMT) by Suyoun Kim
Conversation processing is a core capability in conversational AI. However, current speech recognition systems, even state-of-the-art ones, model a single, isolated utterance rather than an entire conversation. These systems are therefore unable to exploit potentially important contextual information that spans multiple utterances or speakers in a conversation. This thesis focuses
on designing an End-to-End speech recognition system that processes entire conversations. To achieve this goal, I propose three novel techniques: 1) an efficient way to preserve long conversational contexts by creating a context encoder that maps spoken utterance histories to a single
context vector; 2) an effective way to integrate conversational contexts into End-to-End models
using a gating mechanism; and 3) various methods to encode conversational contexts from previously spoken utterances, augmented with world knowledge from external linguistic resources (e.g., BERT, fastText). I show accuracy improvements on three large corpora,
Switchboard (300 hours), Fisher (2,000 hours), and a medical-conversation corpus (1,700 hours), and present analyses demonstrating the effectiveness of my approach. This thesis provides insight into designing conversational speech recognition and spoken language understanding
systems, which are becoming increasingly important as voice-driven device interfaces become mainstream.
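The first two ideas above can be illustrated with a minimal sketch. Here a hypothetical context encoder averages embeddings of previously spoken utterances into a single context vector, and a learned sigmoid gate blends that vector with the current decoder state. The function names, the mean-pooling encoder, and the gate parameterization are illustrative assumptions, not the exact architecture of the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_context(history_embeddings):
    """Hypothetical context encoder: map a history of utterance
    embeddings (one vector per past utterance) to a single context
    vector by mean pooling."""
    return np.mean(history_embeddings, axis=0)

def gated_context_fusion(h, c, W, b):
    """Blend the current decoder state h with the conversational
    context vector c via an elementwise sigmoid gate (assumed
    parameterization: gate computed from the concatenation [h; c])."""
    g = sigmoid(W @ np.concatenate([h, c]) + b)  # gate values in (0, 1)
    return g * h + (1.0 - g) * c                 # elementwise convex blend

# Toy dimensions and random stand-ins for learned quantities.
rng = np.random.default_rng(0)
d = 4
history = rng.standard_normal((3, d))   # embeddings of 3 past utterances
h = rng.standard_normal(d)              # current-utterance decoder state
W = 0.1 * rng.standard_normal((d, 2 * d))
b = np.zeros(d)

c = encode_context(history)
fused = gated_context_fusion(h, c, W, b)
```

Because each gate value lies strictly between 0 and 1, every coordinate of the fused state is a convex combination of the corresponding coordinates of `h` and `c`, letting the model learn per-dimension how much conversational history to carry into decoding.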