Learning from Real-Time Over-the-Shoulder Instructions in a Dynamic Task
We extended the work of Anderson et al. (in press) by modeling how people learn from “over-the-shoulder” instructions – instructions given immediately after an action is executed – while performing an Anti-Air Warfare Coordinator (AAWC) task. Specifically, we modeled the incremental top-down influence of instructions on the visual search process. The model first responded to the over-the-shoulder instructions and converted them into declarative memory chunks, which were strengthened with each repeated exposure. As the strengths of these instruction chunks increased, the model incrementally learned to improve its selection of the next track; as the instructions decayed over time, performance declined. The model fit the data well, suggesting that it captured the effects of the instructions on learning and performance in this dynamic task.
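The strengthening-and-decay dynamics described above can be made concrete with a brief sketch. Assuming the model relies on ACT-R's standard base-level learning (the mechanism implied by the declarative-chunk account, though the exact equations are not stated here), an instruction chunk $i$ that has been presented $n$ times, with lag $t_j$ since the $j$-th exposure and decay parameter $d$, has base-level activation

\[
B_i = \ln\!\left(\sum_{j=1}^{n} t_j^{-d}\right),
\]

and the probability of retrieving it when selecting the next track would follow the usual logistic form

\[
P_i = \frac{1}{1 + e^{-(B_i - \tau)/s}},
\]

where $\tau$ is the retrieval threshold and $s$ the activation noise. Under these assumptions, repeated exposures raise $B_i$ and improve track selection, while growing lags $t_j$ lower it, producing the decline in performance as the instructions decay.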