Towards the Application of Reinforcement Learning to Undirected Developmental Learning
We consider the problem of how a learning agent in a continuous and dynamic world can autonomously learn about itself, its environment, and how to perform simple actions. In previous work we showed how an agent could learn an abstraction consisting of contingencies and distinctions. In this paper we propose a method whereby an agent using this abstraction can create its own reinforcement learning problems. The agent generates an internal signal that motivates it to move into states in which a contingency will hold. The agent then uses reinforcement learning to learn to reach those states effectively, and it can then use the knowledge acquired through reinforcement learning as part of simple actions. We evaluate this method using a simulated physical agent that affects its world through two continuous motor variables and experiences its world through a set of fifteen continuous and discrete sensor variables. We show that, using this method, the learning agent can autonomously generate reinforcement learning problems that allow it to perform simple tasks, and we compare its performance to that of a hand-created reinforcement learner using tile coding.
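As a rough illustration of the core idea, the sketch below pairs an internally generated reward (+1 on entering a state in which a contingency predicate holds) with tabular Q-learning over a single coarse tiling. This is not the paper's implementation: the point-mass world, the contingency_holds predicate, the single-tiling coder, and all parameters are illustrative assumptions standing in for the learned abstraction and the fifteen-variable sensor space.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: the state is a position in [0, 1]^2, and the
    # four discrete actions are small moves (the paper's agent instead uses
    # two continuous motor variables).
    ACTIONS = np.array([[0.05, 0.0], [-0.05, 0.0], [0.0, 0.05], [0.0, -0.05]])

    def contingency_holds(state):
        # Illustrative predicate marking states in which a contingency applies.
        return state[0] > 0.9

    def step(state, a):
        # Noisy point-mass dynamics, clipped to the unit square.
        return np.clip(state + ACTIONS[a] + rng.normal(0.0, 0.005, 2), 0.0, 1.0)

    class TileCoder:
        """One coarse tiling over [0, 1]^2; real tile coding overlays
        several offset tilings, omitted here to keep the sketch short."""
        def __init__(self, bins=10):
            self.bins = bins
        def index(self, s):
            c = np.minimum((s * self.bins).astype(int), self.bins - 1)
            return c[0] * self.bins + c[1]

    coder = TileCoder()
    Q = np.zeros((coder.bins ** 2, len(ACTIONS)))
    alpha, gamma, eps = 0.1, 0.95, 0.1

    for episode in range(500):
        s = rng.random(2)  # random start state
        for t in range(100):
            i = coder.index(s)
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[i]))
            s2 = step(s, a)
            # Internally generated reward: +1 for reaching a contingency
            # state, with a small step cost otherwise.
            r = 1.0 if contingency_holds(s2) else -0.01
            j = coder.index(s2)
            Q[i, a] += alpha * (r + gamma * np.max(Q[j]) - Q[i, a])
            s = s2
            if r > 0:
                break  # episode ends once the contingency state is reached

The design point the sketch is meant to convey is that no external reward is supplied: the reward function is constructed by the agent itself from the contingency predicate, turning "get to where the contingency will hold" into an ordinary reinforcement learning problem.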