<p dir="ltr">The rapid advancement of large-scale machine learning systems, driven by increased computational power and massive training datasets, has revolutionized fields ranging from healthcare to financial forecasting. However, the deployment of these systems raises critical privacy concerns, particularly as models are trained on sensitive data including personal images, patient records, and copyrighted content. These issues are particularly acute in federated learning, where sensitive user data may be inadvertently exposed to model owners or other participants. In addition, large generative AI models introduce novel privacy and security vulnerabilities, including unintended memorization of training data and potential copyright infringement. However, enforcing privacy usually comes with a compromise of the utility of large-scale ML models and systems. Mitigating such tensions is important to push the frontiers of deploying privacy-enhancing techniques to real world large-scale AI products. </p><p dir="ltr">In this thesis, our aim is to find ways to improve the fundamental challenge of balancing privacy protection with model utility in real-world machine learning applications. We investigate three key research questions to improve privacy-utility trade-offs in modern ML systems. We start by looking at the most classic image classification setting, where we introduce a variant of Gaussian mechanism using bounded support. We provide novel accounting and proof showing that the proposed mechanism provably amplifies privacy. As a result, we observe improved privacy-utility trade-offs during empirical evaluations on large image classification model training. We then move to a more advanced setting known as federated learning where data is distributed across different clients rather than collected as a centralized dataset. Under such a setting, we develop a formalization and efficient algorithms for privacy protection through personalization techniques, showing that multi-task learning effectively improves privacy-utility trade-offs at both client-level and sample-level protection. Finally, we move to privacy for the most up-to-date Large Language Models (LLMs). Due to limitations of traditional principled privacy protection tools, we examine machine unlearning as a privacy protection mechanism for LLMs, revealing critical limitations in current unlearning heuristics that neither effectively forget sensitive information nor preserve utility for the unsensitive retain knowledge.</p>