Modern machine learning systems are trained on massive amounts of data. It turns out that, without special care, machine learning models are prone to regurgitating or otherwise revealing information about individual data points. This is problematic when parts of the training data are sensitive or contain private information, as is the case in many settings of interest. Dr. Kamath will discuss differential privacy, a rigorous notion of data privacy, and how it can be used to provably protect against such inadvertent data disclosures by machine learning models.