Relational learning is a subfield of artificial intelligence that learns with expressive logical or relational representations. In this thesis, I consider the problem of efficient relational learning. I describe a new relational learning approach based on path-constrained random walks and demonstrate, through extensive experiments on information retrieval (IR) and natural language processing (NLP) tasks, that relational learning can be applied at a scale not previously possible. This scalability rests on three ingredients: a family of easy-to-learn path features, fast random walk methods, and distributed computing.
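To make the core idea concrete, the following is a minimal illustrative sketch (not the thesis implementation) of a path-constrained random walk on a toy graph with typed edges. The graph, relation names, and node labels are invented for illustration: following a relation path such as ("authored", "cites") from a start node yields a probability distribution over end nodes, and such distributions can serve as easy-to-learn features.

```python
from collections import defaultdict

# Toy graph: edges[relation][node] -> neighbors reachable via that relation.
# All names here are hypothetical examples.
edges = {
    "authored": {"alice": ["p1", "p2"]},
    "cites":    {"p1": ["p3"], "p2": ["p3", "p4"]},
}

def walk(start, path):
    """Distribution over nodes reached by following `path` (a sequence of
    relations) from `start`, stepping uniformly among matching edges."""
    dist = {start: 1.0}
    for rel in path:
        nxt = defaultdict(float)
        for node, p in dist.items():
            nbrs = edges.get(rel, {}).get(node, [])
            for n in nbrs:
                nxt[n] += p / len(nbrs)  # uniform choice among neighbors
        dist = dict(nxt)
    return dist

# alice -> {p1, p2} -> their citations; mass splits uniformly at each step.
print(walk("alice", ("authored", "cites")))  # {'p3': 0.75, 'p4': 0.25}
```

Each relation path defines one such distribution; a learner can then weight many paths to score candidate answers, which is what makes the features cheap to compute and learn.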