Improved storage, sensing, and automated data collection technologies have resulted in a world full of data that are noisy, incomplete, high-dimensional, and of astronomical size.
This abundance of data motivates data-driven approaches to signal processing, while their messiness calls for more robust and accurate inference methods, e.g. methods that leverage the structure of the data. Graphs are a natural choice for representing the diverse structures inherent in data. Many physical signals are generated from graph-structured objects such as road and sensor networks, and time-series signals and images can be generalized to graph signals. Moreover, graphs can encode complex relationships and interactions between objects, e.g. via similarity graphs. This thesis studies how graph regularization can help solve challenging inference problems more accurately and quickly. Graph regularization is a flexible technique that drives the solutions of optimization problems toward desired properties with respect to a graph.

We incorporate graph regularization into a variety of learning tasks (denoising, matrix factorization, and distributed multitask learning) and apply it to signals of varying complexity (scalars, vectors, and matrices). We first analyze the performance of non-convex regularizers in denoising and observe, both theoretically and experimentally, that the power of graph regularization is bounded by how accurately the graph captures the underlying structure in the data. Next, we propose a fast algorithm for matrix factorization with a total-variation-based regularizer and illustrate its application to hyperspectral unmixing. Finally, we derive a simple fusion framework for distributed multitask learning that linearly combines local estimates according to task similarities and difficulties. This circumvents the complications of data sharing, e.g. privacy, and requires only one round
of communication. The proposed method is instantiated for linear regression and principal component analysis (PCA), and is validated on simulated data.
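As a point of reference (the specific regularizers, graphs, and notation studied in the thesis may differ), a graph-regularized denoising problem is typically written as
\[
\hat{x} \;=\; \operatorname*{arg\,min}_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\,\lVert y - x \rVert_2^2 \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} w_{ij}\, \rho(x_i - x_j),
\]
where $y$ is the noisy observation on the $n$ graph nodes, $\mathcal{E}$ and $w_{ij}$ are the edge set and edge weights, $\rho$ is a penalty on the difference across each edge (the quadratic $\rho(t) = t^2$ gives Laplacian regularization, $\rho(t) = |t|$ gives graph total variation, and non-convex choices of $\rho$ are also possible), and $\lambda$ sets the regularization strength. Larger $\lambda$ drives the estimate $\hat{x}$ to be smoother with respect to the graph.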
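The abstract specifies only that the fusion framework combines local estimates linearly; a schematic form, with placeholder symbols not taken from the thesis, is
\[
\hat{\theta}_k^{\mathrm{fused}} \;=\; \sum_{l=1}^{K} w_{kl}\, \hat{\theta}_l, \qquad k = 1, \dots, K,
\]
where $\hat{\theta}_l$ is the estimate computed locally for task $l$ and the combination weights $w_{kl}$ depend on how similar tasks $k$ and $l$ are and on how difficult (e.g. how noisy or data-poor) each task is. Since each node only needs to transmit its local estimate once, such a combination requires a single round of communication and no sharing of raw data.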