Privacy in Machine Learning and Artificial Intelligence

FAIM 2018 Workshop | Stockholm, July 15 | Stockholmsmässan (Room C2)

PiMLAI'18

Scope

The one-day workshop focuses on the technical aspects of privacy research, with invited and contributed talks by distinguished researchers in the area. We will conclude the workshop with a panel discussion on ethical and regulatory aspects. The programme will emphasize the diversity of points of view on the problem of privacy, exemplified by the approaches pursued by the sub-communities scattered across the different meetings comprising the Federated Artificial Intelligence Meeting. We will also ensure there is ample time for discussions that encourage networking between researchers from these different sub-communities, which we hope will lead to mutually beneficial long-term collaborations.

Invited Speakers

  • Catuscia Palamidessi (INRIA)
  • Local Differential Privacy on Metric Spaces: optimizing the trade-off with utility.
    Local differential privacy (LDP) is a distributed variant of differential privacy (DP) in which the obfuscation of the sensitive information is done at the level of the individual records, and in general it is used to sanitize data that are collected for statistical purposes. LDP has the same properties of compositionality and independence from the prior as DP, and it has the further advantages that (a) each user can choose the level of privacy they wish, (b) it does not need to assume a trusted third party, and (c) since all stored records are individually sanitized, there is no risk of privacy leaks due to security breaches. On the other hand, LDP in general requires more noise than DP to achieve the same level of protection, with negative consequences for utility. In practice, utility becomes acceptable only on very large collections of data, which is why LDP is especially successful among big companies such as Google and Apple, which can count on collecting data from a huge number of users.
    In this talk, we propose a variant of LDP suitable for metric spaces, such as location data or energy consumption data, and we show that it provides much higher utility for the same level of privacy. Furthermore, we discuss algorithms to extract the best possible statistical information from the data obfuscated with this metric variant of LDP. (Both notions are sketched for reference after this list.)
  • Úlfar Erlingsson (Google)
  • Machine Learning Privacy Problems and Practice.
    For the last several years, Google has been leading the development and real-world deployment of state-of-the-art, practical techniques for learning statistics and ML models with strong privacy guarantees for the data involved. I'll introduce this work, and the RAPPOR and Prochlo mechanisms for learning statistics in the Chromium and Fuchsia open-source projects. Then I'll present a new "exposure" metric to estimate the privacy problems due to unintended memorization in machine learning models, and show how such memorization can allow extracting individual secrets, such as social security numbers. Finally, I'll give an overview of the practical techniques we've developed for training Deep Neural Networks with strong privacy guarantees, based on Differentially-Private Stochastic Gradient Descent and Private Aggregation of Teacher Ensembles. (A sketch of one DP-SGD step appears after this list.)
  • Pınar Yolum (Utrecht University)
  • Semantic Approaches for Collaborative Privacy Management in Online Social Networks
    Privacy is a major concern in online social networks. Online social networks allow users to specify privacy constraints to some extent, but enforcing them over distributed content is difficult. The main reason is that users are allowed to create and share content about themselves as well as about others. When multiple entities distribute content without control, information can reach unintended individuals. Since the privacy constraints of these users may differ, privacy disputes occur. Ideally, all users concerned by a piece of content should be able to engage in a discussion of their privacy constraints so that they can agree on whether to share the content and, if so, with whom.
    This talk will discuss our recent work on collaborative privacy management for resolving disputes among users in online social networks, with a focus on argumentation and negotiation. Our work is based on representing each user in an online social network with an agent that is responsible for managing and enforcing its user's privacy constraints. When an agent wants to share a post, an agreement session starts between that agent and the other relevant agents. The agents exchange arguments to express their privacy stances and try to convince each other that their claims hold. At the end of the session, the system decides whether sharing the post is justified according to the arguments the agents provided. (A toy sketch of such a session appears after this list.)
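
For reference, here is a standard way to state the two notions that Palamidessi's talk contrasts; the notation is ours, not the speaker's. A local randomizer $K$ satisfies $\varepsilon$-LDP if, for all pairs of input values $x, x'$ and every output $y$,

    \Pr[K(x) = y] \le e^{\varepsilon} \, \Pr[K(x') = y].

The metric variant ($d$-privacy, as in geo-indistinguishability) scales the bound by the distance between the inputs:

    \Pr[K(x) = y] \le e^{\varepsilon \, d(x, x')} \, \Pr[K(x') = y],

so nearby inputs (e.g., close locations) remain nearly indistinguishable while distant ones may be told apart, which is what permits less noise and hence higher utility.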
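
For the DP-SGD technique mentioned in Erlingsson's talk, a minimal sketch of a single training step may help. This is our own illustrative NumPy version, not Google's implementation; the function name and interface are assumptions.

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1,
                    clip_norm=1.0, noise_multiplier=1.1):
        """One step of differentially private SGD (illustrative sketch only).

        per_example_grads has shape (batch_size, num_params): one gradient
        per training example, so each example's influence can be bounded.
        """
        # 1. Clip each example's gradient to L2 norm at most clip_norm.
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

        # 2. Sum the clipped gradients and add Gaussian noise calibrated to
        #    the clipping bound (the sum has L2 sensitivity clip_norm).
        noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                                 size=params.shape)
        noisy_grad = (clipped.sum(axis=0) + noise) / len(per_example_grads)

        # 3. Ordinary gradient-descent update on the privatized gradient.
        return params - lr * noisy_grad

The clipping is what bounds any single example's contribution; the overall privacy guarantee then follows from accounting for the Gaussian noise across all training steps.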
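
Finally, a toy sketch of the agreement sessions described in Yolum's talk, under our own drastic simplifications: the class names and the decision rule below are hypothetical, and the actual work uses formal argumentation and negotiation semantics rather than this naive "no objections" rule.

    class Agent:
        """Represents one user and holds that user's privacy constraints."""
        def __init__(self, name, constraints):
            self.name = name
            self.constraints = constraints  # e.g. {"audience": {"friends"}}

        def arguments_against(self, post):
            # Return an argument for every constraint this post would violate.
            return [(self.name, key, allowed)
                    for key, allowed in self.constraints.items()
                    if post.get(key) not in allowed]

    def agreement_session(relevant_agents, post):
        """Collect every relevant agent's arguments and decide on sharing."""
        arguments = []
        for agent in relevant_agents:
            arguments.extend(agent.arguments_against(post))
        # Naive decision rule: share only if no agent raised an objection.
        return len(arguments) == 0, arguments

    alice = Agent("alice", {"audience": {"friends"}})
    bob = Agent("bob", {"audience": {"friends", "public"}})
    ok, args = agreement_session([alice, bob], {"audience": "public"})
    print(ok, args)  # False: alice objects to sharing with the public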

Accepted Papers

Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Guillermo Sapiro
Learning and Deciding Our Own Privacy in a Collaborative System
Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes
Revisiting Membership Inference Attacks Against Machine Learning Models
Seda Gürses, Rebekah Overdorf, Ero Balsa
POTs: The revolution will not be optimized?
Abdurrahman Can Kurtan, Pınar Yolum
PELTE: Privacy Estimation of Images from Tags
Maria-Florina Balcan, Travis Dick, Ellen Vitercik
Dispersion for Private Optimization of Piecewise Lipschitz Functions
Phillipp Schoppmann, Hendrik Borchert, Björn Scheuermann
Distributed Linear Regression with Differential Privacy
Sam Leroux, Tim Verbelen, Pieter Simoens, Bart Dhoedt
Privacy Aware Offloading of Deep Neural Networks
Adrià Gascón, Borja Balle, Phillipp Schoppmann
Private Nearest Neighbors Classification in Federated Databases
Joonas Jälkö, Antti Honkela, Samuel Kaski
Privacy-aware data sharing
Jamie Hayes, Luca Melis, George Danezis, Emiliano De Cristofaro
LOGAN: Membership Inference Attacks Against Generative Models
Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal
Generative Adversarial Privacy
Zhenyu Wu, Zhangyang Wang, Zhaowen Wang, Hailin Jin
Towards Privacy-Preserving Visual Recognition via Adversarial Training
Vasyl Pihur, Aleksandra Korolova, Frederick Liu, Subhash Sankuratripati, Moti Yung, Dachuan Huang, Ruogu Zeng
Differentially-private "Draw and Discard" Machine Learning
Raman Arora, Vladimir Braverman, Jalaj Upadhyay
Differentially Private Robust PCA
Aaron Schein, Zhiwei Steven Wu, Alexandra Schofield, Mingyuan Zhou, Hanna Wallach
Locally Private Bayesian Inference for Count Models
Eleftheria Makri, Dragos Rotaru, Nigel Smart, Frederik Vercauteren
EPIC: Efficient Private Image Classification (or: Learning from the Masters)
Niki Kilbertus, Adrià Gascón, Matt Kusner, Michael Veale, Krishna Gummadi, Adrian Weller
Blind Justice: Fairness with Encrypted Sensitive Attributes
Amartya Sanyal, Matt Kusner, Adrià Gascón, Varun Kanade
Encrypted Prediction as a Service
Teppo Niinimäki, Mikko Heikkilä, Samuel Kaski, Antti Honkela
Deep Transfer Learning of Representations for Differentially Private Learning
Matthias Matousek, Christoph Bösch, Frank Kargl
Privacy-Preserving Decision Trees
Kyle Fritchman, Rafael Dowsley, Tyler Hughes, Martine De Cock, Anderson Nascimento, Ankur Teredesai
Privacy-Preserving Scoring of Tree Ensembles: A Novel Framework for AI in Healthcare
Edward Chou, Josh Beal, Albert Haque, Li Fei-Fei
Delegated Feature Extraction for Encrypted Inference

Travel Grants

Grants are available to help partially cover the travel expenses of students and researchers attending the workshop. Each grant will reimburse registration costs and travel expenses up to a maximum of 700 euros. We might be unable to provide awards to all applicants, in which case awards will be determined by the organizers based on the application material.

Applications are due on June 4, 2018.

An application for a travel award will consist of a single PDF file with a justification of financial need, a summary of research interests, and a brief discussion of why the applicant would benefit from participating in the workshop. Please send your application to pimlai18@easychair.org with the subject line "PiMLAI Travel Grant".

Call For Papers & Important Dates

Abstract submission: May 14, 2018 (11:59pm CET)
Notification of acceptance: May 29, 2018
Late-breaking results submission: June 15, 2018
Notification of acceptance (late-breaking): June 20, 2018
Workshop: July 15, 2018

We invite submissions of recent work on privacy in machine learning and artificial intelligence, both theoretical and application-oriented. As in the other workshops of ICML, IJCAI, AAMAS, and the rest of FAIM, all accepted abstracts will be part of a poster session held during the workshop. Additionally, the PC will select a subset of the abstracts for short oral presentations. At least one author of each accepted abstract is expected to present it at the workshop.

Submissions in the form of extended abstracts must be at most 2 pages long (not including references) and adhere to the ICML format. We do accept submissions of work recently published or currently under review. Submissions do not need to be anonymized. The workshop will not have formal proceedings, but authors of accepted abstracts can choose to have their work published on the workshop webpage.

Solicited topics include, but are not limited to:

  • Differential privacy: theory, applications, and implementations

  • Privacy in the Internet of Things and multi-agent systems

  • Privacy-preserving machine learning

  • Trade-offs between privacy and utility

  • Programming languages for privacy-preserving data analysis

  • Statistical notions of privacy, including relaxations of differential privacy

  • Empirical and theoretical comparisons between different notions of privacy

  • Privacy attacks

  • Policy-making aspects of data privacy

  • Secure multi-party computation techniques for machine learning

  • Learning on encrypted data, homomorphic encryption

  • Distributed privacy-preserving algorithms

  • Normative approaches to privacy in AI

  • Privacy in autonomous systems

  • Privacy in online social networks

Organization


Workshop organizers

  • Borja Balle (Amazon Research Cambridge)
  • Antti Honkela (University of Helsinki)
  • Kamalika Chaudhuri (UCSD CSE)
  • Beyza Ermis (Amazon Research Berlin)
  • Jose Such (King's College London)
  • Mijung Park (MPI Tübingen)

Program Committee

  • Adrià Gascón (Alan Turing Institute)
  • Anand Sarwate (Rutgers University)
  • Aurélien Bellet (INRIA)
  • Carmela Troncoso (EPFL)
  • Christos Dimitrakakis (Chalmers University)
  • Emiliano De Cristofaro (UCL)
  • Gaurav Misra (University of New South Wales)
  • Joseph Geumlek (UCSD CSE)
  • Marco Gaboardi (University at Buffalo, SUNY)
  • Maziar Gomrokchi (McGill University)
  • Michael Brueckner (Amazon Research Berlin)
  • Nadin Kökciyan (King's College London)
  • Olya Ohrimenko (Microsoft Research)
  • Özgür Kafalı (University of Kent)
  • Pauline Anthonysamy (Google)
  • Peter Kairouz (Stanford University)
  • Phillipp Schoppmann (Humboldt University of Berlin)
  • Pradeep Murukannaiah (Rochester Institute of Technology)
  • Shuang Song (UCSD CSE)
  • Yu-Xiang Wang (Amazon AWS)
