Posted on 2007-01-01. Authored by Nathan D. Ratliff, J. Andrew Bagnell, Martin A. Zinkevich.
Promising approaches to structured learning problems have recently been developed in the maximum-margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large-scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, and we analyze the theoretical convergence, generalization, and robustness properties of the resulting techniques. These algorithms are simple, memory efficient, fast to converge, and have small regret in the online setting. We also investigate a novel convex regression formulation of structured learning. Finally, we demonstrate the benefits of the subgradient approach on three structured prediction problems.
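For concreteness, here is a minimal sketch (not taken from the paper) of the kind of online subgradient update used for a regularized structured hinge risk; the feature map, the loss-augmented inference oracle, and the O(1/sqrt(t)) step-size schedule below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def structured_subgradient(examples, feature_fn, loss_aug_argmax, dim,
                           lam=1e-3, lr0=0.1, epochs=10):
    """Online subgradient descent on a regularized structured hinge risk.

    examples        -- list of (x, y_true) training pairs
    feature_fn      -- joint feature map f(x, y) -> np.ndarray of shape (dim,)
    loss_aug_argmax -- oracle returning argmax_y [w . f(x, y) + L(y_true, y)]
    (all names here are hypothetical placeholders)
    """
    w = np.zeros(dim)
    t = 0
    for _ in range(epochs):
        for x, y_true in examples:
            t += 1
            alpha = lr0 / np.sqrt(t)  # diminishing step size, O(1/sqrt(t))
            # Loss-augmented inference: the most violated structured output
            y_hat = loss_aug_argmax(w, x, y_true)
            # Subgradient of the structured hinge term plus the L2 regularizer
            g = feature_fn(x, y_hat) - feature_fn(x, y_true) + lam * w
            w -= alpha * g
    return w
```

Each update needs only a single inference call and one weight vector in memory, which is what makes this family of methods attractive for large-scale structured prediction.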