Discriminative Optimization: Theory and Applications to Computer Vision
Many computer vision problems are formulated as the optimization of a cost function. This approach faces two main challenges: (i) designing a cost function with a local optimum at an acceptable solution, and (ii) developing an efficient numerical method to search for one (or more) of these local optima. While designing such functions is feasible in the noiseless case, the stability and location of local optima are largely unknown under noise, occlusion, or missing data. In practice, this can result in undesirable local optima or in no local optimum at the expected solution. On the other hand, numerical optimization algorithms in high-dimensional spaces are typically local and often rely on expensive first- or second-order information to guide the search. To overcome these limitations, we propose Discriminative Optimization (DO), a method that learns search directions from data without the need for a cost function. Specifically, DO learns a sequence of updates in the search space that leads to stationary points corresponding to the desired solutions. Using training data, DO can find solutions that are more robust to perturbations of real data, unlike conventional optimization, which may fail if there is a mismatch between the cost function and the noise distribution. We provide a formal analysis of DO, proving its convergence in the training phase. We also explore the relation between DO and generalized convexity and monotonicity, and show that the conditions for the convergence of DO are broader than those required by convexity. In terms of applications, we illustrate DO's potential in the problems of 3D point cloud registration, camera pose estimation, and image denoising. We show that DO can generally outperform state-of-the-art algorithms in terms of accuracy, robustness to perturbations, and computational efficiency.
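To make the core idea concrete, the following is a minimal sketch of a DO-style learned update sequence on a toy 1D location-estimation problem. All names, the hand-crafted feature map, and the least-squares stage-wise training are illustrative assumptions for this sketch, not the paper's actual feature design or applications: at each stage t, a linear map D_t is regressed from feature vectors h(x_t) to the ideal updates x* - x_t over a set of training problems, and at inference the learned maps are applied in sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, points):
    # Hand-crafted feature map h(x) for this toy problem (an assumption,
    # not the paper's features): residual statistics plus a bias term.
    return np.array([np.mean(points) - x, np.median(points) - x, 1.0])

# Generate training problems: a true location x* with heavy-tailed noisy
# observations, where a naive least-squares cost would be suboptimal.
n_train, n_stages = 200, 10
problems = []
for _ in range(n_train):
    x_star = rng.uniform(-5, 5)
    pts = x_star + rng.laplace(0.0, 1.0, size=30)
    problems.append((x_star, pts))

# Stage-wise training: at each stage t, fit D_t by least squares so that
# the update h(x_t) @ D_t approximates the ideal step x* - x_t, then
# advance every training problem with the learned update.
maps = []
xs = np.zeros(n_train)  # common initialization x_0 = 0
for t in range(n_stages):
    H = np.stack([features(xs[i], problems[i][1]) for i in range(n_train)])
    y = np.array([problems[i][0] - xs[i] for i in range(n_train)])
    D, *_ = np.linalg.lstsq(H, y, rcond=None)
    maps.append(D)
    xs = xs + H @ D

# Inference on a new problem: apply the learned sequence of updates.
x_true = 2.5
pts = x_true + rng.laplace(0.0, 1.0, size=30)
x = 0.0
for D in maps:
    x = x + features(x, pts) @ D
```

The key design choice, mirroring the abstract, is that no cost function is ever written down: the search directions come entirely from regression on training data, so robustness to the (here, Laplacian) noise is learned rather than modeled.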