Carnegie Mellon University

A Comparative Approach to Understanding General Intelligence: Predicting Cognitive Performance in an Open-ended Dynamic Task

journal contribution
Posted on 2009-05-01, authored by Christian Lebiere, Cleotilde Gonzalez, and Walter Warwick
The evaluation of an AGI system can take many forms. There is a long tradition in Artificial Intelligence (AI) of competitions focused on key challenges. A similar, but less celebrated, trend has emerged in computational cognitive modeling: model comparison. As with AI competitions, model comparisons invite the development of different computational cognitive models of a well-defined task. However, unlike AI, where the goal is to provide the maximum level of functionality, up to and exceeding human capabilities, the goal of a model comparison is to simulate human performance; typically, goodness-of-fit measures are calculated for the various models. Also unlike AI competitions, where the best performer is declared the winner, model comparisons center on understanding in some detail how the different modeling "architectures" have been applied to the common task. In this paper we announce a new model comparison effort that will illuminate the general features of cognitive architectures as they are applied to control problems in dynamic environments. We begin by briefly describing the task to be modeled, our motivation for selecting that task, and what we expect the comparison to reveal. Next, we describe the programmatic details of the comparison, including a quick survey of the requirements for accessing, downloading, and connecting different models to the simulated task environment. We conclude with remarks on the general value of this and other model comparisons for advancing the science of AGI development.
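The abstract notes that model comparisons typically score models by goodness of fit to human data rather than by raw task performance. As an illustrative sketch only (the paper does not specify which measures this comparison uses), two common choices are root-mean-square error and the Pearson correlation between human and model performance curves; the data below are hypothetical numbers, not from the study:

```python
import math

def goodness_of_fit(human, model):
    """Return (rmse, r): root-mean-square error and Pearson correlation
    between a series of human scores and matched model predictions."""
    n = len(human)
    rmse = math.sqrt(sum((h - m) ** 2 for h, m in zip(human, model)) / n)
    mean_h = sum(human) / n
    mean_m = sum(model) / n
    cov = sum((h - mean_h) * (m - mean_m) for h, m in zip(human, model))
    var_h = sum((h - mean_h) ** 2 for h in human)
    var_m = sum((m - mean_m) ** 2 for m in model)
    r = cov / math.sqrt(var_h * var_m)
    return rmse, r

# Hypothetical per-block performance scores (illustrative only)
human = [0.42, 0.55, 0.61, 0.70, 0.74]
model = [0.40, 0.58, 0.60, 0.68, 0.78]
rmse, r = goodness_of_fit(human, model)
```

A low RMSE with a high correlation indicates the model tracks both the level and the shape of the human learning curve, which is the kind of quantitative fit such comparisons report alongside the qualitative analysis of how each architecture was applied.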
