Carnegie Mellon University
IRT Modeling of Tutor Performance To Predict End-of-year Exam Scores

Journal contribution, posted on 2012-04-10, authored by Elizabeth Ayers and Brian W. Junker

Interest in end-of-year accountability exams has increased dramatically since the passage of the No Child Left Behind (NCLB) Act in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well students will perform on end-of-year exams. In this paper we use student performance on the Assistment System, an on-line mathematics tutor, to show that replacing percent correct with an Item Response Theory (IRT) estimate of student proficiency leads to better-fitting prediction models. In addition, other tutor performance metrics are used to further increase prediction accuracy. Finally, we calculate prediction error bounds to obtain an absolute benchmark against which our models can be compared.
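The abstract's core comparison, using an IRT proficiency estimate rather than percent correct as a predictor of end-of-year scores, can be illustrated with a minimal sketch. This is not the authors' implementation or the Assistment data: it simulates responses under a Rasch (1PL) model, approximates item difficulties from proportions correct, estimates each student's ability by one-dimensional maximum likelihood, and compares simple linear regressions of the exam score on each predictor. All data, names, and the crude difficulty approximation are assumptions for illustration.

```python
# Illustrative sketch only (simulated data, not the Assistment dataset or the
# authors' model). Compares percent correct vs. a Rasch (1PL) IRT proficiency
# estimate as predictors of an end-of-year exam score.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit  # logistic function

rng = np.random.default_rng(0)
n_students, n_items = 200, 40

# Simulate Rasch responses; each student attempts only ~70% of the items,
# which is where IRT's adjustment for item difficulty matters most.
theta_true = rng.normal(0, 1, n_students)
beta_true = rng.normal(0, 1, n_items)
prob = expit(theta_true[:, None] - beta_true[None, :])
X = (rng.random((n_students, n_items)) < prob).astype(int)
attempted = rng.random((n_students, n_items)) < 0.7
exam = 50 + 10 * theta_true + rng.normal(0, 5, n_students)  # noisy outcome

# Crude item-difficulty estimates: negative logit of the smoothed proportion
# correct among students who attempted each item (an assumption, not a full fit).
n_att = attempted.sum(axis=0)
n_cor = (X * attempted).sum(axis=0)
p_item = (n_cor + 0.5) / (n_att + 1.0)
beta_hat = -np.log(p_item / (1.0 - p_item))

def rasch_ability(resp, beta):
    """Maximum-likelihood Rasch ability for one student's attempted items."""
    def neg_loglik(theta):
        pr = expit(theta - beta)
        return -np.sum(resp * np.log(pr) + (1 - resp) * np.log(1.0 - pr))
    return minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

theta_hat = np.array([rasch_ability(X[i, attempted[i]], beta_hat[attempted[i]])
                      for i in range(n_students)])
pct_correct = np.array([X[i, attempted[i]].mean() for i in range(n_students)])

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

print("R^2 using percent correct:", round(r_squared(pct_correct, exam), 3))
print("R^2 using IRT proficiency:", round(r_squared(theta_hat, exam), 3))
```

Because students attempt different item subsets, the IRT ability adjusts for the difficulty of the items each student actually saw, whereas percent correct does not; the printed R^2 values are illustrative output of the simulation, not results from the paper.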

Publisher Statement

All Rights Reserved

Date

2012-04-10
