Authored by John Langford and Avrim Blum
A major topic in machine learning is determining good upper bounds on the true error rates of learned hypotheses based on their empirical performance on training data. In this paper, we present new adaptive bounds designed for learning algorithms that operate by making a sequence of choices. These bounds, which we call Microchoice bounds, are similar to Occam-style bounds and can be used to make learning algorithms self-bounding in the style of Freund (1998). We then show how to combine these bounds with Freund's query-tree approach, producing a version of Freund's query-tree structure that can be implemented much more efficiently.
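
As a rough illustration of the flavor of such a bound (a sketch under standard assumptions, not the paper's exact statement): suppose the algorithm makes a sequence of choices c_1, ..., c_d, where each c_i is drawn from a finite choice space C_i, and is trained on m i.i.d. examples. Splitting the confidence parameter \delta across choice sequences and applying Hoeffding's inequality with a union bound gives, with probability at least 1 - \delta,

  \[
    \mathrm{err}(h) \;\le\; \widehat{\mathrm{err}}(h)
      \;+\; \sqrt{\frac{\sum_{i=1}^{d} \ln\lvert C_i\rvert \;+\; \ln(1/\delta)}{2m}},
  \]

where h is the hypothesis produced, \mathrm{err}(h) is its true error, and \widehat{\mathrm{err}}(h) is its empirical error on the training sample. The penalty term depends only on the choices the algorithm actually made, which is what makes the bound adaptive and lets the algorithm compute a bound on its own output (self-bounding).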