Carnegie Mellon University

Improved Guarantees for Agnostic Learning of Disjunctions

journal contribution
posted on 2008-01-01, authored by Pranjal Awasthi, Avrim Blum, Or Sheffet
Given an arbitrary distribution D over {0, 1}<sup>n</sup> and an arbitrary target function c∗, the problem of agnostic learning of disjunctions is to achieve an error rate comparable to the error OPT<sub>disj</sub> of the best disjunction with respect to (D, c∗). Achieving error O(n · OPT<sub>disj</sub>) + ε is trivial, and Winnow [12] achieves error O(r · OPT<sub>disj</sub>) + ε, where r is the number of relevant variables in the best disjunction. In recent work, Peleg [13] shows how to achieve a bound of Õ(√n · OPT<sub>disj</sub>) + ε in polynomial time. In this paper we improve on Peleg's bound, giving a polynomial-time algorithm achieving a bound of <p> O(n<sup>1/3+α</sup> · OPT<sub>disj</sub>) + ε </p><p> for any constant α > 0. The heart of the algorithm is a method for weak learning when OPT<sub>disj</sub> = O(1/n<sup>1/3+α</sup>), which can then be fed into existing agnostic boosting procedures to achieve the desired guarantee.</p>
