SPLASH 2025
Sun 12 - Sat 18 October 2025, Singapore
co-located with ICFP/SPLASH 2025
Sat 18 Oct 2025 11:00 - 11:15 at Orchid West - Verification 2 Chair(s): Sukyoung Ryu

Neural networks are susceptible to privacy attacks that can extract private information about the training set. To cope, several training algorithms guarantee differential privacy (DP) by adding noise to their computation. However, DP requires adding noise calibrated to every possible training set, which leads to a significant decrease in the network's accuracy. Individual DP (iDP) restricts DP to a given training set. We observe that some inputs deterministically satisfy iDP without any noise. By identifying them, we can provide iDP label-only access to the network with only a minor decrease in accuracy. However, identifying the inputs that satisfy iDP without any noise is highly challenging. Our key idea is to compute the iDP deterministic bound (iDP-DB), which overapproximates the set of inputs that do not satisfy iDP, and to add noise only to their predicted labels. To compute the tightest iDP-DB, which enables guarding the label-only access with a minimal accuracy decrease, we propose LUCID, which leverages several formal verification techniques. First, it encodes the problem as a mixed-integer linear program, defined over the network and over every network trained identically but without one data point. Second, it abstracts this set of networks using a hyper-network. Third, it eliminates the overapproximation error via a novel branch-and-bound technique. Fourth, it bounds the differences of matching neurons in the network and the hyper-network, encodes them as linear constraints to prune the search space, and employs linear relaxation when they are small. We evaluate LUCID on fully connected and convolutional networks over four datasets and compare the results to existing DP training algorithms, which in particular provide iDP guarantees. We show that LUCID can provide classifiers with a perfect individual privacy guarantee (0-iDP), which is infeasible for DP training algorithms, at an accuracy decrease of only 1.4%. For more relaxed ε-iDP guarantees, LUCID incurs an accuracy decrease of only 1.2%. In contrast, existing DP training algorithms that obtain ε-DP guarantees, and in particular ε-iDP guarantees, reduce the accuracy by 12.7%.
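The label-only access scheme the abstract describes can be illustrated with a minimal, hypothetical Python sketch. All names here (model, loo_models, in_idp_db, label_only_query) are ours, not LUCID's API: the brute-force leave-one-out disagreement check stands in for the sound overapproximation LUCID computes via MILP and its hyper-network abstraction, and randomized response over labels is a standard eps-DP mechanism standing in for whatever noising the paper applies to flagged inputs.

import numpy as np

def in_idp_db(x, model, loo_models):
    # Brute-force stand-in: does any network trained without one data
    # point disagree with the full network on x's label? LUCID instead
    # bounds this set symbolically, without evaluating |D| networks.
    label = int(np.argmax(model(x)))
    return any(int(np.argmax(m(x))) != label for m in loo_models)

def label_only_query(x, model, loo_models, eps=1.0, num_labels=10, rng=None):
    # Return a label: deterministic outside the iDP-DB, noised inside it.
    rng = rng or np.random.default_rng()
    label = int(np.argmax(model(x)))
    if not in_idp_db(x, model, loo_models):
        return label  # deterministically satisfies iDP: no noise needed
    # Randomized response over labels: keep the true label with
    # probability e^eps / (e^eps + k - 1), else report a uniform other.
    p_keep = np.exp(eps) / (np.exp(eps) + num_labels - 1)
    if rng.random() < p_keep:
        return label
    others = [l for l in range(num_labels) if l != label]
    return int(rng.choice(others))

Under this sketch, the accuracy cost is paid only on inputs inside the iDP-DB, which is why a tighter bound (LUCID's goal) directly translates into a smaller accuracy decrease.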

Sat 18 Oct

Displayed time zone: Perth

10:30 - 12:15
Verification 2 (OOPSLA) at Orchid West
Chair(s): Sukyoung Ryu KAIST
10:30
15m
Talk
FO-Complete Program Verification for Heap Logics
OOPSLA
Adithya Murali University of Illinois Urbana-Champaign, Hrishikesh Balakrishnan University of Illinois Urbana-Champaign, Aaron Councilman University of Illinois Urbana-Champaign, P. Madhusudan University of Illinois Urbana-Champaign
10:45
15m
Talk
Foundations for Deductive Verification of Continuous Probabilistic Programs: From Lebesgue to Riemann and Back
OOPSLA
Kevin Batz RWTH Aachen University, Joost-Pieter Katoen RWTH Aachen University, Francesca Randone Department of Mathematics, Informatics and Geosciences, University of Trieste, Italy, Tobias Winkler RWTH Aachen University
11:00
15m
Talk
Guarding the Privacy of Label-Only Access to Neural Network Classifiers via Formal Verification
OOPSLA
Anan Kabaha Technion, Dana Drachsler-Cohen Technion
11:15
15m
Talk
KestRel: Relational Verification Using E-Graphs for Program Alignment
OOPSLA
Robert Dickerson Purdue University, Prasita Mukherjee Purdue University, Benjamin Delaware Purdue University
11:30
15m
Talk
Laurel: Unblocking Automated Verification with Large Language Models
OOPSLA
Eric Mugnier University of California San Diego, Emmanuel Anaya Gonzalez University of California San Diego, Nadia Polikarpova University of California San Diego, Ranjit Jhala University of California San Diego, Yuanyuan Zhou University of California San Diego
11:45
15m
Talk
Scaling Instruction-Selection Verification against Authoritative ISA Semantics
OOPSLA
Michael McLoughlin Carnegie Mellon University, Ashley Sheng Wellesley College, Chris Fallin F5, Bryan Parno Carnegie Mellon University, Fraser Brown Carnegie Mellon University, Alexa VanHattum Wellesley College
12:00
15m
Talk
Verification of Bit-Flip Attacks against Quantized Neural Networks
OOPSLA
Yedi Zhang National University of Singapore, Lei Huang ShanghaiTech University, Pengfei Gao ByteDance, Fu Song Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences; Nanjing Institute of Software Technology, Jun Sun Singapore Management University, Jin Song Dong National University of Singapore