Bounding Information Leakage in Machine Learning
Recently, it has been shown that machine learning models can leak sensitive information about their training data. This information leakage is exposed through membership and attribute inference attacks. Although many attack strategies have been proposed, little effort has been made to formalize these problems.
