Accounting for outcome and process measures in dynamic decision-making tasks through model calibration
Varun Dutt; Cleotilde Gonzalez
DOI: 10.1184/R1/6570983.v1
https://kilthub.cmu.edu/articles/journal_contribution/Accounting_for_outcome_and_process_measures_in_dynamic_decision-making_tasks_through_model_calibration/6570983

Abstract: Computational models of learning, and the theories they represent, are often validated by calibrating them to human data on decision outcomes. However, only a few models explain the process by which these decision outcomes are reached. We argue that models of learning should reflect the process through which decision outcomes are reached, and that validating a model on a process measure is likely to help explain both the process and the decision outcome simultaneously. To demonstrate the proposed validation, we use a large dataset from the Technion Prediction Tournament and an existing Instance-Based Learning model. We present two ways of calibrating the model's parameters to human data: on an outcome measure and on a process measure. In agreement with our expectations, we find that calibrating the model on the process measure explains both the process and outcome measures better than calibrating it on the outcome measure. These results hold when the model is generalized to a different dataset. We discuss implications for explaining both the process and the decision outcomes in computational models of learning.

Published: 2015-09-01
Keywords: outcome and process measures; computational models of learning; Instance-based learning; dynamic decisions; binary choice; calibration
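
The abstract's central manipulation, fitting the same model's free parameters once to an outcome measure and once to a process measure, can be illustrated with a minimal sketch. The Python below is an assumption-laden illustration, not the paper's method: a simple delta-rule chooser with learning rate alpha and choice temperature tau stands in for the Instance-Based Learning model, the outcome measure is taken to be the proportion of maximizing choices per trial, the process measure is taken to be the alternation rate, and the "human" target curves are synthetic rather than Technion Prediction Tournament data. The helper names (simulate, outcome_measure, process_measure, calibrate) are hypothetical.

    # Illustrative sketch only: a delta-rule chooser stands in for the IBL model,
    # and the "human" targets below are synthetic, not the Technion data.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(alpha, tau, n_trials=100, n_runs=200):
        """Repeated binary choice; option 1 pays 1 with p=0.8, option 0 pays 0.5 surely."""
        choices = np.zeros((n_runs, n_trials), dtype=int)
        for r in range(n_runs):
            q = np.zeros(2)  # value estimates for the two options
            for t in range(n_trials):
                # Logistic (two-option softmax) choice rule with temperature tau
                p1 = 1.0 / (1.0 + np.exp(-(q[1] - q[0]) / tau))
                c = int(rng.random() < p1)
                reward = (rng.random() < 0.8) * 1.0 if c == 1 else 0.5
                q[c] += alpha * (reward - q[c])  # delta-rule update (stand-in for IBL learning)
                choices[r, t] = c
        return choices

    def outcome_measure(choices):
        # Outcome: proportion of maximizing (option-1) choices per trial, across runs
        return choices.mean(axis=0)

    def process_measure(choices):
        # Process: alternation rate, i.e., how often the choice at t differs from t-1
        return (choices[:, 1:] != choices[:, :-1]).mean(axis=0)

    def calibrate(target, measure_fn, grid):
        """Grid search minimizing mean squared deviation between model and target curves."""
        best, best_msd = None, np.inf
        for alpha, tau in grid:
            msd = np.mean((measure_fn(simulate(alpha, tau)) - target) ** 2)
            if msd < best_msd:
                best, best_msd = (alpha, tau), msd
        return best, best_msd

    # Synthetic "human" targets (placeholders for observed data)
    human = simulate(0.3, 0.2)
    human_outcome, human_process = outcome_measure(human), process_measure(human)

    grid = [(a, t) for a in (0.1, 0.3, 0.5) for t in (0.1, 0.2, 0.4)]
    print("calibrated on outcome:", calibrate(human_outcome, outcome_measure, grid))
    print("calibrated on process:", calibrate(human_process, process_measure, grid))

The paper's test then amounts to cross-evaluation: take the parameters found under each calibration target and score them on the other measure. The abstract's claim is that the process-calibrated parameters account for both curves better than the outcome-calibrated ones.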