Carnegie Mellon University

Unsupervised Word Alignment with Arbitrary Features

Journal contribution posted on 2011-06-01, authored by Chris Dyer, Jonathan Clark, Alon Lavie, and Noah A. Smith

We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, nonindependent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show our model yields better alignments than generative baselines in a number of language pairs.
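The core idea described above, a lexical translation probability parameterized log-linearly over arbitrary features of a source/target word pair, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the feature function, the weights, and the local normalization over a toy target vocabulary (standing in for the globally normalized model the abstract describes) are all hypothetical.

```python
import math
from collections import defaultdict


def features(src_word, tgt_word):
    """Toy feature function phi(f, e). The model permits arbitrary,
    non-independent features; these two are illustrative only."""
    return {
        f"pair={src_word}|{tgt_word}": 1.0,                        # lexical identity feature
        "same_prefix": 1.0 if src_word[:3] == tgt_word[:3] else 0.0,  # crude orthographic overlap
    }


def log_linear_translation_prob(src_word, tgt_word, tgt_vocab, theta):
    """p(e | f) proportional to exp(theta . phi(f, e)), here normalized
    over a small target vocabulary for simplicity."""
    def score(e):
        return sum(theta.get(k, 0.0) * v for k, v in features(src_word, e).items())

    scores = {e: score(e) for e in tgt_vocab}
    m = max(scores.values())
    z = sum(math.exp(s - m) for s in scores.values())  # log-sum-exp for numerical stability
    return math.exp(scores[tgt_word] - m) / z


# Hypothetical weights and vocabulary, purely for demonstration.
theta = defaultdict(float, {"pair=maison|house": 2.0, "same_prefix": 0.5})
vocab = ["house", "home", "building"]
print(log_linear_translation_prob("maison", "house", vocab, theta))  # ~0.79
```

In the unsupervised setting the abstract describes, weights like `theta` would not be hand-set but learned from unannotated parallel sentences, with alignments treated as latent variables.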

History

Publisher Statement: Copyright 2011 ACL

Date: 2011-06-01
