Scope and Topics

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: they often reveal sensitive personal information that can be exploited, without the knowledge or consent of the individuals involved, for purposes including monitoring, discrimination, and illegal activities.
In its fourth edition, the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-23) provides a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges in designing privacy-preserving AI systems and algorithms, and it will have strong multidisciplinary components, including contributions on policy, legal issues, and the societal impact of privacy in AI.

PPAI-23 will place particular emphasis on:
  1. Algorithmic approaches to protecting data privacy in the context of learning, optimization, and decision making, which raises fundamental challenges for existing technologies.
  2. Social issues related to tracking, tracing, and surveillance programs.
  3. Algorithms and frameworks to release privacy-preserving benchmarks and data sets.

Topics

The workshop organizers invite paper submissions on the following (and related) topics:
  • Applications of privacy-preserving AI systems
  • Attacks on data privacy
  • Differential privacy: theory and applications
  • Distributed privacy-preserving algorithms
  • Privacy-preserving federated learning
  • Human rights and privacy
  • Privacy and fairness
  • Privacy and causality
  • Privacy-preserving optimization and machine learning
  • Privacy-preserving test cases and benchmarks
  • Surveillance and societal issues

Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that the community can use to solve fundamental problems of interest, for example in machine learning and optimization for health systems and urban networks.

Format

The workshop will be a one-day meeting. It will include a number of technical sessions; a poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited talks covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and a number of tutorial talks. It will conclude with a panel discussion.


Important Dates

  • November 21, 2022 – Submission Deadline [Extended]
  • November 21, 2022 – NeurIPS/AAAI Fast Track Submission Deadline [Extended]
  • January 2, 2023 – Acceptance Notification
  • January 10, 2023 – Student Scholarship Program Deadline
  • February 13, 2023 – Workshop Date

Submission Information

Submission URL: https://cmt3.research.microsoft.com/PPAI2023

Submission Types

  • Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
  • Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.

NeurIPS/AAAI Fast Track (Rejected NeurIPS/AAAI Papers)

Rejected NeurIPS/AAAI papers with *average* scores of at least 4.0 may be submitted directly to PPAI along with their previous reviews. These submissions may go through a light review process or be accepted directly if the provided reviews are judged to meet the workshop's standards.

All papers must be submitted in PDF format, using the AAAI-23 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.

NeurIPS/AAAI fast track papers are subject to the same page limits as standard submissions. Fast track papers should be accompanied by their reviews, submitted as supplemental material.

For questions about the submission process, contact the workshop chairs.


PPAI-23 Scholarship Application

PPAI is pleased to announce a Student Scholarship Program for 2023. The program provides partial travel support for students who are full-time undergraduate or graduate students at colleges and universities and who have submitted papers to the workshop program or a letter of recommendation from their faculty advisor.

Preference will be given to participating students presenting papers at the workshop or to students from underrepresented countries and communities.

To participate, please fill in the Student Scholarship Program application form.

Deadline: January 10, 2023.


Registration

Registration is required for all active participants in each workshop and is also open to all interested individuals. The early registration deadline is January 6, 2023. For more information, please refer to the AAAI-23 Workshop page.

Program

February 13, 2023
All times are in Eastern Standard Time (UTC-5).

Time Talk / Presenter
8:50 Introductory remarks
9:00 Invited Talk: Privacy in Image Classification Models: Informed Attacks and Practical Defenses by Borja Balle
9:40 Spotlight Talk: Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints
9:50 Spotlight Talk: Private Ad Modeling with DP-SGD
10:00 Invited Talk: Auditing Differentially Private Machine Learning by Jonathan Ullman
10:40 Break
11:00 Spotlight Talk: An Empirical Analysis of Fairness Notions under Differential Privacy
11:10 Spotlight Talk: SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
11:20 Spotlight Talk: MetaVerSe: Federated Meta-Learning for Versatile and Secure Representations with Dynamic Margins in Embedding Space
11:30 Tutorial: Using and Contributing to the OpenDP Library
13:00 Lunch Break
14:00 Invited Talk: TBA by Christine Task
14:40 Spotlight Talk: Detecting Per-Query Gaps for Differentially Private Synthetic Data
14:50 Spotlight Talk: Marich: A Query-efficient Distributionally-Equivalent Model Extraction Attack using Public Data
15:00 Invited Talk: Do machine learning systems meet the requirements of legal privacy standards? by Kobbi Nissim
15:40 Break
16:00 Panel Discussion: Differential Privacy in real-world deployments: Where are we and what are we missing?
16:45 Concluding Remarks and Poster Session
18:00 End of the Workshop

Accepted Papers

Spotlight Presentations
  • An Empirical Analysis of Fairness Notions under Differential Privacy
    Anderson S de Oliveira (SAP)*; Caelin Kaplan (SAP); Khawla Mallat (SAP); Tanmay Chakraborty (SAP)
  • Private Ad Modeling with DP-SGD
    Carson Denison (Google); Badih Ghazi (Google); Pritish Kamath (Google Research); Ravi Kumar (Google); Pasin Manurangsi (Google); Krishna Giri Narra (Google); Amer Sinha (Google); Avinash Varadarajan (Google AI Healthcare); Chiyuan Zhang (Google)*
  • Marich: A Query-efficient Distributionally-Equivalent Model Extraction Attack using Public Data
    Pratik Karmakar (RKMVERI, Belur, India); Debabrota Basu (Inria)*
  • SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
    Cuong Tran (Syracuse University)*; Ferdinando Fioretto (Syracuse University); Keyu Zhu (Georgia Tech); Pascal Van Hentenryck (Georgia Institute of Technology)
  • Detecting Per-Query Gaps for Differentially Private Synthetic Data
    Shweta Patwa (Duke University)*; Danyu Sun (Duke University); Amir Gilad (Duke University); Ashwin Machanavajjhala (Duke); Sudeepa Roy (Duke University, USA)
  • Pushing the Boundaries of Private, Large-Scale Query Answering with RAP
    Brendan Avent (University of Southern California)*; Aleksandra Korolova (Princeton University)
  • Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints
    Virat Shejwalkar (UMass Amherst)*; Arun Ganesh (Google); Rajiv Mathews (Google); Om Thakkar (Google); Abhradeep Thakurta (Google)
  • Practical Generalizability of DP Synthetic Data Mechanisms
    Brendan Avent (University of Southern California)*; Aleksandra Korolova (Princeton University)
  • MetaVerSe: Federated Meta-Learning for Versatile and Secure Representations with Dynamic Margins in Embedding Space
    Jin Hyuk Lim (Ulsan National Institute of Science and Technology)*; Sung Whan Yoon (Ulsan National Institute of Science and Technology (UNIST))
Poster Presentations
  • Tumult Analytics: a Robust, Easy-to-use, Scalable, and Expressive Framework for Differential Privacy
    Skye Berghel (Tumult Labs); Philip Bohannon (Tumult Labs); Damien Desfontaines (Tumult Labs)*; Charles Estes (Tumult Labs); Sam Haney (Tumult Labs); Luke Hartman (Tumult Labs); Michael Hay (Tumult Labs); Ashwin Machanavajjhala (Tumult Labs); Tom Magerlein (Tumult Labs); Gerome Miklau (Tumult Labs); Amritha Pai (Tumult Labs); William Sexton (Tumult Labs); Ruchit Shrestha (Tumult Labs)
  • Label Inference Attack against Regression Model under Split Learning
    Shangyu Xie (Illinois Institute of Technology)*; Xin Yang (ByteDance Inc.); Yuanshun Yao (ByteDance); Tianyi Liu (ByteDance); Taiqing Wang (ByteDance Ltd); Jiankai Sun (ByteDance)
  • Approximate, Adapt, Anonymize (3A): a Framework for Privacy Preserving Training Data Release for Machine Learning
    Tamas Madl (AWS); Weijie Xu (Amazon)*; Olivia Choudhury (Amazon); Matthew Howard (AWS)
  • Differentially Private Synthetic Data via Zeroth-order Optimization
    Jingwu Tang (Peking University); Terrance Liu (Carnegie Mellon University); Giuseppe Vietri (University of Minnesota)*; Steven Wu (Carnegie Mellon University)
  • How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
    Tamara T. Mueller (Technical University Munich)*; Stefan Kolek (Ludwig Maximilian University of Munich); Friederike Jungmann (Technical University Munich); Alexander Ziller (Technische Universität München); Dmitrii Usynin (Imperial College London); Moritz Knolle (Technische Universität München); Daniel Rueckert (Technische Universität München); Georgios Kaissis (Technische Universität München)
  • Error Maximizing Anti-Sample Generation for Fast Machine Unlearning
    Ayush K Tarun (BITS Pilani); Vikram Singh Chundawat (Birla Institute of Technology and Science, Pilani, Pilani Campus); Murari Mandal (Kalinga Institute of Industrial Technology (KIIT) Bhubaneswar)*; Mohan Kankanhalli (National University of Singapore)
  • Enhancing Privacy Preservation in Federated Learning via Learning Rate Perturbation
    Guangnian Wan (Beijing University of Posts and Telecommunications)*; Jie Xu (Beijing University of Posts and Telecommunications); Jun Yang (Beijing University of Posts and Telecommunications); Ru Yan (China Mobile Research Institute); Haitao Du (China Mobile Research Institute); Baojiang Cui (Beijing University of Posts and Telecommunications)
  • Black-Box Audits for Group Distribution Shifts
    Marc Juarez (University of Edinburgh)*; Samuel Yeom (Carnegie Mellon University); Matt Fredrikson (Carnegie Mellon University)
  • Data price and quantity decisions for differentially private federated learning in industrial IoT
    Haijun Wang (Zhejiang Lab); Bingjie Lu (Zhejiang Lab); Chongning Na (Zhejiang Lab)*
  • On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence
    Daphnee N Chabal (University of Amsterdam)*; Dolly Sapra (University of Amsterdam); Zoltan Mann (University of Amsterdam)
  • On the adaptive sensitivity of differentially private machine learning
    Filippo Galli (Scuola Normale Superiore)*; Sayan Biswas (INRIA, Ecole Polytechnique); Kangsoo Jung (Inria); Catuscia Palamidessi (Laboratoire d'informatique de l'École polytechnique); Tommaso Cucinotta (Scuola di Studi Superiori Sant'Anna)
  • In-Context Learning as a Simple Baseline for Private Machine Learning
    Simran Arora (Stanford University)*; Christopher Re (Stanford University)
  • Privacy evaluation of fairness-enhancing pre-processing techniques
    Jean-Christophe Taillandier (Université de Montréal); Sébastien Gambs (UQAM)*
  • Does Differential Privacy Impact Bias in Pretrained NLP Models?
    Md Khairul Islam (University of Virginia)*; Andrew J Wang (University of Virginia); Jieyu Zhao (UMD); Yangfeng Ji (University of Virginia); Tianhao Wang (University of Virginia)
  • STONE: When Split meets Transfer for One Shot Learning
    Akshat Agrawal (BITS Pilani, Hyderabad)*; Anirudh Kasturi (BITS Pilani, Hyderabad); Chittaranjan Hota (BITS-Pilani, Hyderabad)
  • Guaranteed Confidence Sets for Differentially Private Convex Optimization
    Krishnamurthy Dvijotham (Google Research)*; Abhradeep Thakurta (Google)
  • Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms
    Sophie Noiret (Vienna University of Technology)*; Siddharth Ravi (University of Alicante); Martin Kampel (Vienna University of Technology, Computer Vision Lab); Francisco Florez-Revuelta (University of Alicante)
  • Rényi Differentially Private Bandits
    Achraf Azize (Inria)*; Debabrota Basu (Inria)
  • Toward Compliance Implications and Security Objectives: A Qualitative Study
    Dmitry Prokhorenkov (TUM)*
  • Local Differential Privacy for Sequential Decision Making in a Changing Environment
    Pratik Gajane (Eindhoven University of Technology)*

Tutorial

Using and Contributing to the OpenDP Library [slides]

by Mike Shoemate (Harvard) and Salil Vadhan (Harvard).

Abstract:
The OpenDP Project is a community effort to build trustworthy, open-source software tools for analysis of private data. The core of the OpenDP software is the OpenDP Library, which is a modular collection of statistical algorithms that adhere to the definition of differential privacy. It can be used to build applications of privacy-preserving computations, using a number of different models of privacy. OpenDP is implemented in Rust, with bindings for easy use from Python. The architecture of the OpenDP Library is based on a flexible framework for expressing privacy-aware computations, coming from the paper A Programming Framework for OpenDP (Gaboardi, Hay, Vadhan 2020). In this tutorial, we will give a conceptual overview of the OpenDP programming framework. We will then demonstrate common library APIs, and then show how you may incorporate your own differentially private methods into the library through Python and thereby enable their wider use by the OpenDP community. We will also show how community-vetted proofs of the privacy properties of the OpenDP components are integrated into the library’s documentation and contribution process. Finally, we will outline OpenDP’s roadmap for the future, and highlight ways in which you can engage.
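As a taste of the kind of statistical computation the OpenDP Library supports, here is a minimal sketch of the Laplace mechanism applied to a counting query. It is written in plain Python and deliberately does not use the OpenDP API itself; the data and the function name dp_count are purely illustrative.

    import numpy as np

    def dp_count(values, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing one record
        # changes the count by at most 1, so Laplace noise with scale
        # 1/epsilon yields an epsilon-differentially-private release.
        true_count = sum(1 for v in values if predicate(v))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical toy data: ages of individuals in a sensitive dataset.
    ages = [23, 45, 31, 62, 58, 37, 41]
    print(dp_count(ages, lambda a: a >= 40, epsilon=1.0))

The OpenDP Library expresses the same idea through composable transformations and measurements backed by community-vetted privacy proofs, which is what the tutorial walks through.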

Invited Talks

Auditing Differentially Private Machine Learning [slides]

by Jonathan Ullman (Northeastern University)

Abstract:
A differentially private algorithm comes with a rigorous proof that the algorithm satisfies a strong qualitative and quantitative privacy guarantee, but these stylized mathematical guarantees can both overestimate and underestimate the privacy afforded by the algorithm in a real deployment. In this talk I will motivate and describe my ongoing body of work on using empirical auditing of differentially private machine learning algorithms as a complement to the theory of differential privacy. The talk will discuss how auditing builds on the rich theory and practice of membership-inference attacks, our work on auditing differentially private stochastic gradient descent, and directions for future work.
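To make the auditing idea concrete, the sketch below turns the outcomes of a hypothetical membership-inference attack into an empirical lower bound on epsilon, using the constraint that any (epsilon, delta)-DP mechanism must satisfy TPR <= exp(epsilon) * FPR + delta. The function name and counts are illustrative assumptions, not results from the speaker's work; a rigorous audit would also replace the point estimates with confidence intervals (e.g. Clopper-Pearson) before inverting the bound.

    import math

    def empirical_epsilon_lower_bound(tp, fn, fp, tn, delta=0.0):
        # True-positive and false-positive rates of the attack, measured on
        # trials where the target record was / was not in the training set.
        tpr = tp / (tp + fn)
        fpr = fp / (fp + tn)
        if fpr == 0.0:
            return float("inf")  # point estimate only; a real audit needs confidence intervals here
        if tpr <= delta:
            return 0.0           # the observed rates show no measurable leakage
        # Invert TPR <= exp(epsilon) * FPR + delta to bound epsilon from below.
        return math.log((tpr - delta) / fpr)

    # Hypothetical attack results over 1000 member and 1000 non-member trials.
    print(empirical_epsilon_lower_bound(tp=700, fn=300, fp=100, tn=900))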

Bio:
Jonathan Ullman is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. His research centers on privacy for machine learning and statistics, and its surprising connections to topics like statistical validity, robustness, cryptography, and fairness. He has been recognized with an NSF CAREER award and the Ruth and Joel Spira Outstanding Teacher Award.

Privacy in Image Classification Models: Informed Attacks and Practical Defenses [slides]

by Borja Balle (DeepMind)

Abstract:
In this talk I will discuss two recent works on privacy attacks and differentially private training for image classification models. On the attacks front I will describe a learning-based method capable of extracting complete training images from standard image classification models. Then I will present some recent advances in private training for large image classification models that achieved state-of-the-art results on challenging benchmarks like CIFAR-10 and ImageNet.
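For readers unfamiliar with how private training works under the hood, the following numpy sketch shows the core DP-SGD update: clip each example's gradient, sum, add Gaussian noise, and average. The function and parameter names are illustrative assumptions and do not reproduce the speaker's experiments or any particular library.

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier):
        # Clip each per-example gradient to L2 norm at most clip_norm.
        clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm) for g in per_example_grads]
        # Sum the clipped gradients and add Gaussian noise calibrated to the clip norm.
        noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
            scale=noise_multiplier * clip_norm, size=params.shape)
        # Average over the batch and take a gradient step.
        return params - lr * noisy_sum / len(per_example_grads)

    # Hypothetical usage: a 3-parameter model and a batch of 4 per-example gradients.
    params = np.zeros(3)
    grads = [np.random.randn(3) for _ in range(4)]
    params = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1)

The overall privacy guarantee then follows by accounting for the noise multiplier, sampling rate, and number of steps (e.g. via the moments accountant).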

Bio:
Borja Balle is a Staff Research Scientist at DeepMind. His current research focuses on privacy-preserving training and privacy auditing for large-scale machine learning systems. He obtained his PhD from Universitat Politècnica de Catalunya in 2013, and then held positions as post-doctoral fellow at McGill University (2013-2015), lecturer at Lancaster University (2015-2017) and machine learning scientist at Amazon Research Cambridge (2017-2019).

Do machine learning systems meet the requirements of legal privacy standards? [slides]

by Kobbi Nissim (Georgetown University)

Abstract:
Machine learning systems are widely used in the processing of personal information, and their use is growing at a rapid pace. While these systems bring many benefits, they also raise significant concerns about privacy. To mitigate such concerns, technical-mathematical frameworks such as differential privacy and legal frameworks such as the EU’s General Data Protection Regulation (GDPR) have been introduced. However, the relationship between privacy technology and privacy law is complex and the interaction between the two approaches exposes significant differences, making it challenging to reason whether systems do or do not provide the level of privacy protection as set by privacy law. In this talk, we will review some of the gaps that exist between mathematical and legal approaches to privacy, and ongoing efforts to bridge them while maintaining legal and mathematical rigor.

Bio:
Kobbi Nissim is the McDevitt Chair in Computer Science at Georgetown University and an affiliate professor at Georgetown Law. His work from 2003 and 2004 with Dinur and Dwork initiated rigorous foundational research on privacy, and in 2006 he introduced differential privacy with Dwork, McSherry, and Smith. Nissim was awarded the Paris Kanellakis Theory and Practice Award in 2021, the Gödel Prize in 2017, the IACR TCC Test of Time Award in 2016 and 2018, and the ACM PODS Alberto O. Mendelzon Test-of-Time Award in 2013. He studied at the Weizmann Institute with Prof. Moni Naor.

Title TBA

by Christine Task (Knexus Corporation)

Abstract:
TBA

Bio:
TBA

PPAI-23 Panel:

Differential Privacy in real-world deployments: Where are we and what are we missing?

Panelists

Ashwin Machanavajjhala

Duke University and Tumult Labs

Giulia Fanti

Carnegie Mellon University

Michael B. Hawes

U.S. Census Bureau

Bryant Gipson

Google


Invited Speakers

Jonathan Ullman

Northeastern University


Kobbi Nissim

Georgetown University


Borja Balle

DeepMind


Christine Task

Knexus Corporation


Salil Vadhan

Harvard


Sponsors

Google
Gold Sponsor



Code of Conduct

PPAI 2023 is committed to providing an atmosphere that encourages freedom of expression and exchange of ideas. It is the policy of PPAI 2023 that all participants will enjoy a welcoming environment free from unlawful discrimination, harassment, and retaliation.

Harassment will not be tolerated in any form, including but not limited to harassment based on gender, gender identity and expression, sexual orientation, disability, physical appearance, race, age, religion or any other status. Harassment includes the use of abusive, offensive or degrading language, intimidation, stalking, harassing photography or recording, inappropriate physical contact, sexual imagery and unwelcome sexual advances. Participants asked to stop any harassing behavior are expected to comply immediately.

Violations should be reported to the workshop chairs. All reports will be treated confidentially. The conference committee will deal with each case separately. Sanctions include, but are not limited to, exclusion from the workshop, removal of material from the online record of the conference, and referral to the violator’s university or employer. All PPAI 2023 attendees are expected to comply with these standards of behavior.


Program Committee

  • Abra Ganz (University of Amsterdam)
  • Ajinkya Mulay (Purdue University)
  • Akshat Agrawal (BITS Pilani, Hyderabad)
  • Amrita Roy Chowdhury (University of Wisconsin-Madison)
  • Anderson de Oliveira (SAP)
  • Andrei Paleyes (University of Cambridge)
  • Antti Honkela (University of Helsinki)
  • Aurélien Bellet (INRIA)
  • Aws Albarghouthi (University of Wisconsin-Madison)
  • Ayush Tarun (BITS Pilani)
  • Borja Balle (DeepMind)
  • Chittaranjan Hota (BITS-Pilani, Hyderabad)
  • Chongning Na (Zhejiang Lab)
  • Christine Task (Knexus Research)
  • Cuong Tran (Syracuse University)
  • Damien Desfontaines (Tumult Labs)
  • Daphnee Chabal (University of Amsterdam)
  • Di Wang (KAUST)
  • Dmitrii Usynin (Imperial College London)
  • Dolly Sapra (University of Amsterdam)
  • Edward Raff (Booz Allen Hamilton)
  • Fatemeh Mireshghallah (University of California, San Diego)
  • Filippo Galli (Scuola Normale Superiore)
  • Haijun Wang (Zhejiang Lab)
  • Héber H. Arcolezi (Inria and École Polytechnique (IPP))
  • Ivan Habernal (Technical University of Darmstadt)
  • Jan Ramon (INRIA)
  • Jianfeng Chi (University of Virginia)
  • Jiankai Sun (ByteDance)
  • Jiayuan Ye (National University of Singapore)
  • Jingwu Tang (Peking University)
  • Jonathan Passerat-Palmbach (Imperial College London / ConsenSys Health)
  • Julien Ferry (LAAS-CNRS)
  • Kangsoo Jung (Inria)
  • Keyu Zhu (Georgia Tech)
  • krystal maughan (University of Vermont)
  • M. Hadi Amini (Florida International University)
  • Marc Juarez (University of Edinburgh)
  • Marco Gaboardi (Boston University)
  • Marco Romanelli (NYU Tandon School of Engineering)
  • Martin Kampel (Vienna University of Technology)
  • Michael Hay (Colgate University & Tumult Labs)
  • Mohammad Mahdi Khalili (University of Delaware)
  • Navid Kagalwalla (Carnegie Mellon University)
  • Olukayode Akanni (Near East University)
  • Pasin Manurangsi (Google)
  • Pratik Karmakar (RKMVERI)
  • Rakshit Naidu (Carnegie Mellon University)
  • Ranya Aloufi (Imperial College)
  • Saeyoung Rho (Columbia University)
  • Sahib Singh (OpenMined; Ford R&A)
  • Santiago Zanella-Beguelin (Microsoft Research)
  • Sayan Biswas (INRIA, Ecole Polytechnique)
  • Sébastien Gambs (UQAM)
  • Seng Pei Liew (LINE Corporation)
  • Shangyu Xie (Illinois Institute of Technology)
  • Simran Arora (Stanford University)
  • Sung Whan Yoon (Ulsan National Institute of Science and Technology)
  • Terrence W.K. Mak (Georgia Institute of Technology)
  • Tianhao Wang (University of Virginia)
  • Tsubasa Takahashi (LINE Corporation)
  • Vikram Chundawat (Birla Institute of Technology and Science)
  • Virat Shejwalkar (UMass Amherst)
  • Xiaoting Zhao (Etsy)
  • Yuanshun Yao (ByteDance)
  • Yuliia Lut (Columbia University)
  • Yunwen Lei (University of Birmingham)
  • Yuya Sasaki (Osaka University)

Workshop Chairs

Ferdinando Fioretto

Syracuse University

ffiorett@syr.edu

Catuscia Palamidessi

Inria, Ecole Polytechnique

catuscia@lix.polytechnique.fr

Pascal Van Hentenryck

Georgia Institute of Technology

pvh@isye.gatech.edu