Posted on 2011-09-01, 00:00. Authored by Vamshi Ambati, Stephan Vogel, Jaime G. Carbonell.
This paper investigates active learning to improve statistical machine translation (SMT) for low-resource language pairs, i.e., when very little pre-existing parallel text is available. Since generating additional parallel text to train SMT can be costly, active sampling selects the sentences from a monolingual corpus that, if translated, would have the greatest positive impact on training the SMT models. We investigate strategies based on density and diversity preferences, as well as multi-strategy methods such as a modified version of DUAL and our new ensemble approach, GraDUAL. These yield significant BLEU-score improvements over strong baselines when parallel training data is scarce.
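To make the density/diversity idea concrete, here is a minimal sketch of greedy batch selection from a monolingual pool. The scoring choices are assumptions for illustration only: Jaccard word overlap as the similarity measure, density as average similarity to the rest of the pool, diversity as dissimilarity to already-selected sentences, and an equal-weight trade-off `alpha`. The paper's actual sampling strategies (and DUAL/GraDUAL) are not reproduced here.

```python
# Illustrative sketch of density/diversity-based active sampling.
# All scoring functions below are assumptions, not the paper's method.

def jaccard(a, b):
    # Word-overlap similarity between two sentences (assumed measure).
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def density(sentence, pool):
    # Average similarity to the rest of the pool: favor representative sentences.
    others = [s for s in pool if s != sentence]
    if not others:
        return 0.0
    return sum(jaccard(sentence, s) for s in others) / len(others)

def diversity(sentence, selected):
    # Dissimilarity to the already-selected batch: avoid redundant picks.
    if not selected:
        return 1.0
    return 1.0 - max(jaccard(sentence, s) for s in selected)

def select_batch(pool, k, alpha=0.5):
    # Greedily pick k sentences maximizing a density/diversity trade-off.
    selected = []
    candidates = list(pool)
    for _ in range(min(k, len(candidates))):
        best = max(
            candidates,
            key=lambda s: alpha * density(s, pool)
            + (1 - alpha) * diversity(s, selected),
        )
        selected.append(best)
        candidates.remove(best)
    return selected

pool = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "machine translation needs parallel text",
    "a dog ran in the park",
]
batch = select_batch(pool, 2)
```

In practice the selected `batch` would be sent to human translators, and the resulting sentence pairs added to the SMT training data; the diversity term keeps near-duplicate sentences from being translated twice.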