The Utility Cost of Robust Privacy Guarantees

Citation:

Hao Wang, Mario Diaz, Flavio P. Calmon, and Lalitha Sankar. 2018. “The Utility Cost of Robust Privacy Guarantees.” In Proc. IEEE Int. Symp. on Inf. Theory (ISIT).

Abstract:

Consider a data publishing setting for a data set with public and private features. The objective of the publisher is to maximize the amount of information about the public features in a revealed data set, while keeping the information leaked about the private features bounded. The goal of this paper is to analyze the performance of privacy mechanisms that are constructed to match the distribution learned from the data set. Two distinct scenarios are considered: (i) mechanisms are designed to provide a privacy guarantee for the learned distribution; and (ii) mechanisms are designed to provide a privacy guarantee for every distribution in a given neighborhood of the learned distribution. For the first scenario, given any privacy mechanism, upper bounds on the difference between the privacy-utility guarantees for the learned and true distributions are presented. In the second scenario, upper bounds on the reduction in utility incurred by providing a uniform privacy guarantee are developed.
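To make the first scenario concrete, the sketch below sets up a toy instance of the problem: a private feature S and a public feature X with a true joint distribution, an empirical distribution learned from samples, and a simple randomized-response mechanism applied to X (a hypothetical mechanism chosen here for illustration; the paper's bounds apply to any mechanism). Utility is measured as I(X;Y) and leakage as I(S;Y); evaluating both under the learned and the true distributions exhibits the mismatch the paper's upper bounds control. All names and parameter values (`delta`, `n`, `p_true`) are assumptions of this sketch, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(pxy):
    """I(X;Y) in nats for a joint pmf given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px * py)[mask])).sum())

# True (unknown) joint pmf of private S (rows) and public X (columns).
p_true = np.array([[0.4, 0.1],
                   [0.1, 0.4]])

# Learn an empirical joint pmf from n i.i.d. samples of (S, X).
n = 500
flat = rng.choice(4, size=n, p=p_true.ravel())
p_learned = np.bincount(flat, minlength=4).reshape(2, 2) / n

# A toy privacy mechanism: randomized response on X, i.e. the released
# Y flips X with probability delta (hypothetical choice of mechanism).
delta = 0.2
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])

def guarantees(p_sx, W):
    """Utility I(X;Y) and leakage I(S;Y) induced by releasing Y ~ W(.|X)."""
    p_xy = np.diag(p_sx.sum(axis=0)) @ W   # P(x, y) = P(x) W(y|x)
    p_sy = p_sx @ W                        # P(s, y) = sum_x P(s, x) W(y|x)
    return mutual_information(p_xy), mutual_information(p_sy)

u_learned, l_learned = guarantees(p_learned, W)
u_true, l_true = guarantees(p_true, W)
print(f"learned: utility={u_learned:.4f} nats, leakage={l_learned:.4f} nats")
print(f"true:    utility={u_true:.4f} nats, leakage={l_true:.4f} nats")
```

Since S - X - Y forms a Markov chain, the data-processing inequality guarantees I(S;Y) ≤ I(X;Y) under either distribution; the gap between the learned-distribution and true-distribution guarantees is exactly the quantity the paper's scenario (i) bounds.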
Last updated on 09/23/2018