Multilingual Bottle-Neck Features and Their Application to Under-Resourced Languages
In this paper we present our latest investigation of multilingual bottle-neck (BN) features and their application to rapid language adaptation for new languages. We show that the overall performance of a Multilayer Perceptron (MLP) network improves significantly when it is initialized with a multilingual MLP. Furthermore, ASR performance increases both on the languages used for multilingual MLP training and on a new language. We propose a new strategy called “open target language” MLP to train more flexible models for language adaptation, which is particularly suited to small amounts of training data. The final results on the Vietnamese GlobalPhone database show a 15.8% relative improvement in Syllable Error Rate (SyllER) for the ASR system trained on 22.5 hours of data, and a 16.9% relative gain for the system trained on only 2 hours of data.
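To make the cross-lingual initialization concrete, the following is a minimal PyTorch sketch under stated assumptions: the network architecture, layer sizes, class and function names, and feature dimensions below are all illustrative, since the abstract does not specify them. The idea it illustrates is the one described above: the shared hidden and bottleneck layers are copied from the multilingual MLP, and only the output layer is re-initialized for the target language's phone-state inventory before fine-tuning.

```python
import torch
import torch.nn as nn


class BottleneckMLP(nn.Module):
    """Hypothetical BN network: input -> hidden -> bottleneck -> hidden
    -> phone-state targets. The bottleneck activations serve as BN
    features for the ASR front-end."""

    def __init__(self, n_in, n_hidden, n_bn, n_targets):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.bottleneck = nn.Linear(n_hidden, n_bn)
        self.expand = nn.Linear(n_bn, n_hidden)
        self.out = nn.Linear(n_hidden, n_targets)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))
        bn = self.bottleneck(h)  # linear bottleneck layer
        return self.out(torch.sigmoid(self.expand(bn)))

    def bn_features(self, x):
        """Extract BN features for the ASR system (output layer unused)."""
        with torch.no_grad():
            return self.bottleneck(torch.sigmoid(self.hidden(x)))


def init_from_multilingual(multi: BottleneckMLP,
                           n_target_phones: int) -> BottleneckMLP:
    """Initialize a target-language MLP from a multilingual MLP:
    copy all shared layers; only the output layer keeps its fresh
    random initialization for the new language's targets."""
    target = BottleneckMLP(multi.hidden.in_features,
                           multi.hidden.out_features,
                           multi.bottleneck.out_features,
                           n_target_phones)
    shared = {k: v.clone() for k, v in multi.state_dict().items()
              if not k.startswith("out.")}
    target.load_state_dict(shared, strict=False)
    return target


# Illustrative usage with made-up dimensions: a multilingual MLP over
# 11 stacked 39-dim acoustic frames, then a Vietnamese MLP initialized
# from it and fine-tuned on the small adaptation set.
multi_mlp = BottleneckMLP(n_in=39 * 11, n_hidden=1000, n_bn=42, n_targets=150)
viet_mlp = init_from_multilingual(multi_mlp, n_target_phones=48)
```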