Carnegie Mellon University

Distributed Asynchronous Online Learning for Natural Language Processing

Journal contribution, posted on 2010-06-01. Authored by Kevin Gimpel, Dipanjan Das, and Noah A. Smith.

Recent speed-ups for training large-scale models like those found in statistical NLP exploit distributed computing (either on multicore or “cloud” architectures) and rapidly converging online learning algorithms. Here we aim to combine the two. We focus on distributed, “mini-batch” learners that make frequent updates asynchronously (Nedic et al., 2001; Langford et al., 2009). We generalize existing asynchronous algorithms and experiment extensively with structured prediction problems from NLP, including discriminative, unsupervised, and non-convex learning scenarios. Our results show asynchronous learning can provide substantial speedups compared to distributed and single-processor mini-batch algorithms with no signs of error arising from the approximate nature of the technique.
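The abstract describes asynchronous mini-batch learners in which several workers compute updates against possibly stale shared parameters and apply them without coordination. The sketch below illustrates that update pattern on a toy logistic-regression problem; the task, the use of threads with shared memory, the worker count, and all hyperparameters are assumptions made for illustration and do not reproduce the paper's structured-prediction experiments or its distributed setup.

    # Minimal sketch of asynchronous mini-batch stochastic gradient descent
    # with a shared parameter vector. Toy data and hyperparameters are
    # assumed for illustration only.
    import threading
    import numpy as np

    # Synthetic binary classification data (assumed toy problem).
    rng = np.random.default_rng(0)
    n, d = 4000, 20
    true_w = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = (X @ true_w > 0).astype(float)

    w = np.zeros(d)        # shared parameter vector, updated without locks
    step_size = 0.1
    batch_size = 32
    steps_per_worker = 500
    num_workers = 4        # assumed; stands in for cores or cluster nodes

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def worker(seed):
        """Repeatedly sample a mini-batch, compute a gradient against the
        current (possibly stale) shared weights, and apply it in place
        with no synchronization -- the asynchronous, approximate part."""
        local_rng = np.random.default_rng(seed)
        for _ in range(steps_per_worker):
            idx = local_rng.integers(0, n, size=batch_size)
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (sigmoid(Xb @ w) - yb) / batch_size
            w[:] -= step_size * grad   # in-place update on shared memory

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
    print(f"training accuracy after asynchronous mini-batch updates: {accuracy:.3f}")

The intuition this sketch tries to convey is that mini-batching amortizes each unsynchronized write to the shared parameters over several examples, so workers can update frequently while the staleness of any one read stays small.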

History

Publisher Statement

Copyright 2010 ACL

Date

2010-06-01
