Active Object Recognition by Offline Solving of POMDPs
In this paper, we address the problem of recognizing multiple known objects under partial views and occlusion. We consider the situation in which the view of the camera can be controlled, in the sense of an active perception planning problem. One common approach formulates such active object recognition in information-theoretic terms, namely selecting the actions that maximize the expected informativeness of the next observation with respect to the recognition belief. In our work, we instead formulate active perception planning as a Partially Observable Markov Decision Process (POMDP) whose reward is associated solely with minimizing recognition time. The returned policy is the same as the one obtained using the information value. By treating observation as a time-consuming process and imposing constraints on time, we minimize the number of observations and consequently maximize the value of each one for the recognition task. Decoupling the reward from the belief in the POMDP enables the planning problem to be solved offline, making the recognition process itself less computationally intensive. In a focused simulation example, we show that the policy is optimal in the sense that it performs the minimum number of actions and observations required to achieve recognition.
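To make the formulation concrete, the following is a minimal illustrative sketch (not the paper's exact model) of a two-object recognition POMDP in which the only running cost is the per-step observation time, so the offline policy minimizes the number of observations. The sensor accuracy, wrong-declaration penalty, and belief-grid resolution are all assumptions chosen for illustration; the policy is computed offline by value iteration over a discretized belief space.

```python
import numpy as np

# Hypothetical two-object POMDP: states {A, B}, actions
# {declare A, declare B, observe}. Observing costs STEP time units;
# a wrong declaration incurs WRONG. All parameters are assumptions.
ACC = 0.85        # P(observation matches the true object)
WRONG = -20.0     # penalty for declaring the wrong object
STEP = -1.0       # time cost of one observation
GAMMA = 1.0       # undiscounted; episodes end on a declaration

grid = np.linspace(0.0, 1.0, 201)   # belief b = P(object is A)

def belief_update(b, obs_a):
    """Bayes update of P(A) after an observation suggesting A (or B)."""
    like_a = ACC if obs_a else 1.0 - ACC
    like_b = 1.0 - ACC if obs_a else ACC
    num = b * like_a
    return num / (num + (1.0 - b) * like_b)

def solve(iters=200):
    """Offline value iteration on the belief grid; returns V and policy."""
    V = np.zeros_like(grid)
    b_after_a = np.array([belief_update(b, True) for b in grid])
    b_after_b = np.array([belief_update(b, False) for b in grid])
    p_obs_a = grid * ACC + (1.0 - grid) * (1.0 - ACC)
    for _ in range(iters):
        q_decl_a = (1.0 - grid) * WRONG          # wrong if object is B
        q_decl_b = grid * WRONG                  # wrong if object is A
        # observe: pay STEP, then move to the Bayes-updated belief
        v_a = np.interp(b_after_a, grid, V)
        v_b = np.interp(b_after_b, grid, V)
        q_obs = STEP + GAMMA * (p_obs_a * v_a + (1.0 - p_obs_a) * v_b)
        V = np.maximum.reduce([q_decl_a, q_decl_b, q_obs])
    policy = np.argmax(np.stack([q_decl_a, q_decl_b, q_obs]), axis=0)
    return V, policy   # 0 = declare A, 1 = declare B, 2 = observe
```

The resulting policy has the expected threshold structure: declare immediately when the belief is near certain, and keep observing (paying the time cost) while the belief is ambiguous, which is how the time-cost reward induces observation-minimizing behavior without an explicit information-gain term.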