The Fourth AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-23)
February 13, 2023
PPAI is an in-person event at the Walter E. Washington Convention Center, Washington, DC, USA (Room 147A).
PPAI will also be live-streamed at underline.io
The availability of massive amounts of data, coupled with high-performance cloud computing
platforms, has driven significant progress in artificial intelligence and, in particular,
machine learning and optimization. It has profoundly impacted several areas, including computer
vision, natural language processing, and transportation. However, the use of rich data sets
also raises significant privacy concerns: they often reveal sensitive personal information
that can be exploited, without the knowledge or consent of the individuals involved, for
various purposes including monitoring, discrimination, and illegal activities.
In its fourth edition, the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-23)
provides a platform for researchers, AI practitioners, and policymakers to discuss technical
and societal issues and present solutions related to privacy in AI applications.
The workshop will focus on both the theoretical and practical challenges related to the design
of privacy-preserving AI systems and algorithms and will have strong multidisciplinary
components, including soliciting contributions about policy, legal issues, and societal
impact of privacy in AI.
Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that can be used by the community to solve fundamental problems of interest, for example in machine learning and optimization for health systems and urban networks.
The workshop will be a one-day meeting. It will include a number of technical sessions; a poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited speakers covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and a number of tutorial talks. The workshop will conclude with a panel discussion.
Submission URL: https://cmt3.research.microsoft.com/PPAI2023
Rejected NeurIPS/AAAI papers with *average* scores of at least 4.0 may be submitted directly to PPAI along with previous reviews. These submissions may go through a light review process or be accepted if the provided reviews are judged to meet the workshop standard.
All papers must be submitted in PDF format, using the AAAI-23 author kit.
Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and
clarity. Each submission will be thoroughly reviewed by at least two program committee members.
NeurIPS/AAAI fast-track papers are subject to the same page limits as standard submissions. Fast-track papers should be accompanied by their reviews, submitted as supplemental material.
For questions about the submission process, contact the workshop chairs.
PPAI is pleased to announce a student scholarship program for 2023. The program provides partial
travel support for students who are full-time undergraduate or graduate students at colleges and universities
and who have submitted either papers to the workshop program or letters of recommendation from their faculty advisors.
Preference will be given to students presenting papers at the workshop and to students from underrepresented countries and communities.
To participate, please fill in the Student Scholarship Program application form.
Deadline: January 10, 2023.
| Time | Talk / Presenter |
|---|---|
| 8:50 | Introductory remarks |
| 9:00 | Invited Talk: Privacy in Image Classification Models: Informed Attacks and Practical Defenses by Borja Balle |
| 9:40 | Spotlight Talk: Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints |
| 9:50 | Spotlight Talk: Private Ad Modeling with DP-SGD |
| 10:00 | Invited Talk: Auditing Differentially Private Machine Learning by Jonathan Ullman |
| 10:40 | Break |
| 11:00 | Spotlight Talk: An Empirical Analysis of Fairness Notions under Differential Privacy |
| 11:10 | Spotlight Talk: SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles |
| 11:20 | Spotlight Talk: MetaVerSe: Federated Meta-Learning for Versatile and Secure Representations with Dynamic Margins in Embedding Space |
| 11:30 | Tutorial: Using and Contributing to the OpenDP Library |
| 13:00 | Lunch Break |
| 14:00 | Invited Talk: TBA by Christine Task |
| 14:40 | Spotlight Talk: Detecting Per-Query Gaps for Differentially Private Synthetic Data |
| 14:50 | Spotlight Talk: Marich: A Query-efficient Distributionally-Equivalent Model Extraction Attack using Public Data |
| 15:00 | Invited Talk: Do machine learning systems meet the requirements of legal privacy standards? by Kobbi Nissim |
| 15:40 | Break |
| 16:00 | Panel Discussion: Differential Privacy in real-world deployments: Where are we and what are we missing? |
| 16:45 | Concluding Remarks and Poster Session |
| 18:00 | End of the Workshop |
Abstract:
The OpenDP Project is a community effort to build trustworthy, open-source software tools for the analysis of private data. The core of the OpenDP software is the OpenDP Library, a modular collection of statistical algorithms that adhere to the definition of differential privacy. It can be used to build applications for privacy-preserving computation under a number of different privacy models. OpenDP is implemented in Rust, with bindings for easy use from Python. The architecture of the OpenDP Library is based on a flexible framework for expressing privacy-aware computations, introduced in the paper A Programming Framework for OpenDP (Gaboardi, Hay, Vadhan 2020).
In this tutorial, we will give a conceptual overview of the OpenDP programming framework. We will then demonstrate common library APIs and show how you can incorporate your own differentially private methods into the library through Python, thereby enabling their wider use by the OpenDP community. We will also show how community-vetted proofs of the privacy properties of the OpenDP components are integrated into the library's documentation and contribution process. Finally, we will outline OpenDP's roadmap for the future and highlight ways in which you can engage.
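For a flavor of the library's chaining style, here is a minimal sketch of a differentially private count built with the OpenDP Python bindings. The function names follow the 0.x-era Python API; they may differ in later releases, and the dataset and noise scale are illustrative placeholders.

```python
from opendp.transformations import make_count
from opendp.measurements import make_base_discrete_laplace
from opendp.mod import enable_features

# Opt in to components whose formal privacy proofs are not yet complete.
enable_features("contrib")

# Chain a counting transformation with a discrete Laplace measurement.
dp_count = make_count(TIA=float) >> make_base_discrete_laplace(scale=1.0)

release = dp_count([1.2, 3.4, 5.6, 7.8])  # a differentially private count
epsilon = dp_count.map(d_in=1)            # privacy loss when one record changes
print(release, epsilon)
```

The `>>` operator composes a transformation with a measurement into a single measurement whose privacy map is derived automatically, which is the core idea of the framework described in the tutorial.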
Abstract:
A differentially private algorithm comes with a rigorous proof that it satisfies a strong qualitative and quantitative privacy guarantee, but these stylized mathematical guarantees can both overestimate and underestimate the privacy afforded by the algorithm in a real deployment. In this talk, I will motivate and describe my ongoing body of work on using empirical auditing of differentially private machine learning algorithms as a complement to the theory of differential privacy. The talk will discuss how auditing builds on the rich theory and practice of membership-inference attacks, our work on auditing differentially private stochastic gradient descent, and directions for future work.
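As a concrete illustration of the kind of quantity such audits estimate (not code from the talk): for any (ε, δ)-DP mechanism, a membership-inference attack's true- and false-positive rates must satisfy TPR ≤ e^ε · FPR + δ, so observed attack performance yields an empirical lower bound on ε. A minimal sketch, with hypothetical attack statistics:

```python
import math

def empirical_epsilon_lower_bound(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Invert the DP hypothesis-testing bound TPR <= exp(eps) * FPR + delta
    to lower-bound the privacy loss actually exhibited by a mechanism."""
    if fpr <= 0.0 or tpr <= delta:
        return 0.0  # attack is too weak to certify any privacy loss
    return math.log((tpr - delta) / fpr)

# Hypothetical rates measured by retraining many times with/without a target record:
print(empirical_epsilon_lower_bound(tpr=0.60, fpr=0.05, delta=1e-5))  # ~2.48
```

If this empirical lower bound exceeds the ε claimed by the implementation, the audit has found a bug; if it falls far below, the theoretical analysis may be loose for that deployment.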
Bio:
Jonathan Ullman is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. His research centers on privacy for machine learning and statistics, and its surprising connections to topics like statistical validity, robustness, cryptography, and fairness. He has been recognized with an NSF CAREER award and the Ruth and Joel Spira Outstanding Teacher Award.
Abstract:
In this talk, I will discuss two recent works on privacy attacks and differentially private training for image classification models. On the attacks front, I will describe a learning-based method capable of extracting complete training images from standard image classification models. I will then present some recent advances in private training for large image classification models that achieved state-of-the-art results on challenging benchmarks like CIFAR-10 and ImageNet.
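For readers unfamiliar with how differentially private training works mechanically, here is a minimal NumPy sketch of a single DP-SGD update (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al., 2016). It is illustrative only, not the large-scale training recipe discussed in the talk, and all parameter values are placeholders.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads, clip_norm=1.0,
                  noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD step: clip each example's gradient to L2 norm `clip_norm`,
    sum, add Gaussian noise with std `noise_multiplier * clip_norm`, average,
    and take a gradient-descent step."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean

# Toy usage with placeholder gradients:
params = np.zeros(3)
grads = [np.array([1.0, 2.0, 0.5]), np.array([0.2, -0.1, 0.3])]
params = dp_sgd_update(params, grads)
```

The clipping bound caps each example's influence on the update, which is what lets the added Gaussian noise be calibrated to a formal privacy guarantee.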
Bio:
Borja Balle is a Staff Research Scientist at DeepMind. His current research focuses on privacy-preserving training and privacy auditing for large-scale machine learning systems. He obtained his PhD from Universitat Politècnica de Catalunya in 2013, and then held positions as post-doctoral fellow at McGill University (2013-2015), lecturer at Lancaster University (2015-2017) and machine learning scientist at Amazon Research Cambridge (2017-2019).
Abstract:
Machine learning systems are widely used in the processing of personal information, and their use is growing at a rapid pace. While these systems bring many benefits, they also raise significant concerns about privacy. To mitigate such concerns, technical-mathematical frameworks such as differential privacy and legal frameworks such as the EU’s General Data Protection Regulation (GDPR) have been introduced.
However, the relationship between privacy technology and privacy law is complex, and the interaction between the two approaches exposes significant differences, making it challenging to reason about whether systems provide the level of privacy protection set by privacy law.
In this talk, we will review some of the gaps that exist between mathematical and legal approaches to privacy, and ongoing efforts to bridge them while maintaining legal and mathematical rigor.
Bio:
Kobbi Nissim is the McDevitt Chair in Computer Science at Georgetown University and an affiliate professor at Georgetown Law. His work from 2003 and 2004 with Dinur and Dwork initiated rigorous foundational research on privacy, and in 2006 he introduced differential privacy with Dwork, McSherry, and Smith. Nissim was awarded the Paris Kanellakis Theory and Practice Award in 2021, the Gödel Prize in 2017, the IACR TCC Test of Time Award in 2016 and 2018, and the ACM PODS Alberto O. Mendelzon Test-of-Time Award in 2013. He studied at the Weizmann Institute with Prof. Moni Naor.
Abstract:
TBA
Bio:
TBA