Carnegie Mellon University

Improved Guarantees for Agnostic Learning of Disjunctions

Journal contribution
Posted on 2008-01-01, authored by Pranjal Awasthi, Avrim Blum, Or Sheffet
Given an arbitrary distribution D over {0,1}^n and an arbitrary target function c*, the problem of agnostic learning of disjunctions is to achieve an error rate comparable to the error OPT_disj of the best disjunction with respect to (D, c*). Achieving error O(n · OPT_disj) + ε is trivial, and Winnow [12] achieves error O(r · OPT_disj) + ε, where r is the number of relevant variables in the best disjunction. In recent work, Peleg [13] shows how to achieve a bound of Õ(√n · OPT_disj) + ε in polynomial time. In this paper we improve on Peleg's bound, giving a polynomial-time algorithm achieving a bound of

O(n^{1/3+α} · OPT_disj) + ε

for any constant α > 0. The heart of the algorithm is a method for weak learning when OPT_disj = O(1/n^{1/3+α}), which can then be fed into existing agnostic boosting procedures to achieve the desired guarantee.
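For context on the Winnow guarantee cited above, the sketch below shows one standard variant of the classic Winnow update rule [12] for learning a monotone disjunction over {0,1}^n. It is an illustration only, not the paper's new algorithm; the function name, threshold choice, promotion/demotion factor of 2, and synthetic data are our assumptions.

```python
import numpy as np

def winnow(X, y, n, theta=None):
    """A standard Winnow variant for a monotone disjunction over {0,1}^n.

    Keeps one positive weight per variable, predicts by comparing the
    weighted sum of active variables to a threshold, and updates
    multiplicatively only on mistakes. For a target disjunction on r
    variables it makes O(r log n) mistakes in the noise-free setting.
    """
    w = np.ones(n)                        # one weight per Boolean variable
    theta = n if theta is None else theta
    for x, label in zip(X, y):
        pred = 1 if w @ x >= theta else 0
        if pred == 0 and label == 1:      # false negative: promote active weights
            w[x == 1] *= 2.0
        elif pred == 1 and label == 0:    # false positive: demote active weights
            w[x == 1] /= 2.0
    return w, theta

# Hypothetical usage: noise-free data labeled by the disjunction x_0 OR x_3, n = 20.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20))
y = (X[:, 0] | X[:, 3]).astype(int)
w, theta = winnow(X, y, n=20)
```

In the agnostic setting the abstract describes, labels need not be consistent with any disjunction, and Winnow's guarantee degrades to error O(r · OPT_disj) + ε; the paper's contribution is to replace that dependence (and Peleg's Õ(√n) factor) with n^{1/3+α} by combining a weak learner with existing agnostic boosting procedures.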

History

Date

2008-01-01
