Carnegie Mellon University

Deciding when to stop: efficient experimentation to learn to predict drug-target interactions.

journal contribution
posted on 2014-01-01, authored by Maja Temerinac-Ott, Armaghan W. Naik, Robert Murphy

BACKGROUND: Active learning is a powerful tool for guiding an experimentation process. Instead of performing all possible experiments in a given domain, active learning can be used to pick the experiments that will add the most knowledge to the current model. For drug discovery and development in particular, active learning has been shown to reduce the number of experiments needed to obtain high-confidence predictions. In practice, however, it is crucial to have a method for evaluating the quality of the current predictions and deciding when to stop the experimentation process. Only by applying reliable stopping criteria to active learning can time and costs in the experimental process actually be saved.
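
The experiment-selection loop described above can be illustrated with a toy sketch. The simulated matrix, the mean-based predictor, and the closest-to-0.5 uncertainty rule here are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated drug-target interaction matrix (1 = interacts).
# A low-rank product stands in for real activity data.
n_drugs, n_targets = 20, 15
scores = rng.random((n_drugs, 4)) @ rng.random((4, n_targets))
truth = (scores > np.median(scores)).astype(int)

# Start with a small random set of "measured" entries.
observed = np.zeros((n_drugs, n_targets), dtype=bool)
for idx in rng.choice(n_drugs * n_targets, size=30, replace=False):
    observed[np.unravel_index(idx, observed.shape)] = True

def predict(truth, observed):
    """Predict each entry from the observed values in its row and column
    (a crude stand-in for a real interaction-prediction model)."""
    pred = np.full(observed.shape, 0.5)
    for i in range(n_drugs):
        for j in range(n_targets):
            vals = np.concatenate([truth[i, observed[i]],
                                   truth[observed[:, j], j]])
            if vals.size:
                pred[i, j] = vals.mean()
    return pred

accuracies = []
for step in range(100):
    pred = predict(truth, observed)
    # Track accuracy on entries not yet measured.
    accuracies.append(((pred > 0.5) == truth)[~observed].mean())
    # Active learning step: measure the unobserved entry whose prediction
    # is closest to 0.5, i.e. the one the model is least sure about.
    uncertainty = np.where(observed, -1.0, -np.abs(pred - 0.5))
    i, j = np.unravel_index(np.argmax(uncertainty), observed.shape)
    observed[i, j] = True
```

After each round, `accuracies` records how well the current model predicts the still-unmeasured entries, which is exactly the quantity a stopping criterion must estimate without access to the ground truth.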

RESULTS: We compute active learning traces on simulated drug-target matrices in order to determine a regression model for the accuracy of the active learner. By analyzing the performance of the regression model on simulated data, we design stopping criteria for previously unseen experimental matrices. We demonstrate on four previously characterized drug effect data sets that applying the stopping criteria can yield savings of up to 40% of the total experiments while retaining highly accurate predictions.
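
The trace-based stopping idea can be sketched as follows. The synthetic traces, the two trace features (fraction of the matrix measured, and prediction churn between rounds), and the linear regression are all illustrative assumptions, not the paper's actual features or model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trace(n_steps=50):
    """A toy active-learning trace: accuracy rises and saturates,
    while prediction churn (fraction of predictions that changed
    since the last round) decays. Both carry observation noise."""
    frac = np.linspace(0.05, 0.9, n_steps)       # fraction measured
    acc = 1 - 0.5 * np.exp(-4 * frac) + rng.normal(0, 0.01, n_steps)
    churn = 0.5 * np.exp(-4 * frac) + rng.normal(0, 0.01, n_steps)
    return frac, churn, acc

# Fit a regression acc ~ w0 + w1*frac + w2*churn on many simulated traces.
# Only frac and churn are observable without ground truth.
X_rows, y_rows = [], []
for _ in range(20):
    frac, churn, acc = simulate_trace()
    X_rows.append(np.column_stack([np.ones_like(frac), frac, churn]))
    y_rows.append(acc)
w, *_ = np.linalg.lstsq(np.vstack(X_rows), np.concatenate(y_rows),
                        rcond=None)

def stop_now(frac_observed, churn_now, threshold=0.9):
    """Stopping criterion: halt once the regression model predicts
    that accuracy has reached the desired threshold."""
    predicted = w @ np.array([1.0, frac_observed, churn_now])
    return predicted >= threshold

# Apply the criterion to a fresh, previously unseen trace.
frac, churn, acc = simulate_trace()
stop_step = next((t for t in range(len(frac)) if stop_now(frac[t], churn[t])),
                 len(frac) - 1)
```

The point of the construction is that `stop_now` uses only quantities available during a real campaign; the regression, trained entirely on simulation, converts them into an accuracy estimate that triggers the stop well before the full matrix is measured.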

CONCLUSIONS: We show that active learning accuracy can be predicted using simulated data, and that this prediction enables substantial savings in the number of experiments required to make accurate drug-target predictions.

History

Publisher Statement

© The Author(s) 2013. Published by Oxford University Press.

Date

2014-01-01
