Carnegie Mellon University
Comparative Study of Boosting and Non-Boosting Training for Constructing Ensembles of Acoustic Models

journal contribution
posted on 1997-05-01, 00:00 authored by Rong Zhang, Alexander Rudnicky

This paper compares the performance of Boosting and non-Boosting training algorithms for constructing ensembles of acoustic models in large vocabulary continuous speech recognition (LVCSR). Both algorithms yielded significant word error rate reductions on the CMU Communicator corpus. Notably, the improvements were comparable, even though the Boosting algorithm, with its solid theoretical foundation, might be expected to outperform the non-Boosting algorithm substantially. Several voting schemes for hypothesis combination were evaluated, including weighted voting, unweighted voting, and ROVER.

History

Publisher Statement

© ACM, 1997. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution.

Date

1997-05-01
