Carnegie Mellon University

Fast Distributed Gradient Methods

Journal contribution, posted on 2014-04-01, authored by Dusan Jakovetic, Joao Xavier, and José M. F. Moura

We study distributed optimization problems in which N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex and have Lipschitz continuous gradients (with constant L) and bounded gradients. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k. Our first method, Distributed Nesterov Gradient, achieves rates O(log K / K) and O(log k / k). Our second method, Distributed Nesterov gradient with Consensus iterations, assumes that all nodes know L and μ(W), the second largest singular value of the N × N doubly stochastic weight matrix W. It achieves rates O(1/K^{2−ξ}) and O(1/k^2), where ξ > 0 is arbitrarily small. Further, for both methods we give the explicit dependence of the convergence constants on N and W. Simulation examples illustrate our findings.
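To make the setting concrete, below is a minimal Python sketch of a distributed Nesterov-style gradient iteration over a doubly stochastic weight matrix W, as described at a high level in the abstract. The step-size schedule alpha_k = c/(k+1), the momentum weight k/(k+3), and all function and variable names are illustrative assumptions for this sketch, not the exact algorithm, constants, or analysis from the paper.

```python
import numpy as np

def distributed_nesterov_gradient(grads, W, x0, iters=200, c=1.0):
    """Sketch of a distributed Nesterov-style gradient iteration.

    grads : list of per-node gradient functions grad_i(x) -> ndarray of shape (d,)
    W     : (N, N) doubly stochastic weight matrix (row i mixes node i's neighbors)
    x0    : (N, d) initial iterates, one row per node
    c     : step-size constant; alpha_k = c / (k + 1) is an assumed schedule
    """
    N, d = x0.shape
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(iters):
        alpha = c / (k + 1)                      # diminishing step size (assumed schedule)
        grad_y = np.stack([grads[i](y[i]) for i in range(N)])
        x_new = W @ y - alpha * grad_y           # consensus mixing plus local gradient step
        beta = k / (k + 3)                       # Nesterov-style momentum weight (assumed)
        y = x_new + beta * (x_new - x_prev)      # extrapolation step
        x_prev = x_new
    return x_prev

# Example: N nodes each hold a quadratic cost f_i(x) = 0.5 * ||x - b_i||^2,
# so the network-wide minimizer is the average of the b_i.
rng = np.random.default_rng(0)
N, d = 5, 3
b = rng.normal(size=(N, d))
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.full((N, N), 1.0 / N)                     # complete-graph averaging weights
x = distributed_nesterov_gradient(grads, W, np.zeros((N, d)))
```

In this toy example, every row of x should approach the average of the b_i; on a sparse communication graph, W would instead be a sparse doubly stochastic matrix matching the network topology.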

History

Publisher Statement

© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Date

2014-04-01