SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore
co-located with ICFP/SPLASH 2025

This program is tentative and subject to change.

Sat 18 Oct 2025 12:00 - 12:15 at Orchid West - Verification 2

In the rapidly evolving landscape of neural network security, the resilience of neural networks against bit-flip attacks (i.e., attacks in which an adversary maliciously flips an extremely small number of bits in the memory storing a network's parameters to induce harmful behavior) has emerged as a relevant area of research. Existing studies suggest that quantization may serve as a viable defense against such attacks. Recognizing the documented susceptibility of real-valued neural networks to such attacks and the comparative robustness of quantized neural networks (QNNs), in this work we introduce BFAVerifier, the first verification framework designed to formally verify the absence of bit-flip attacks or to identify all vulnerable parameters in a sound and rigorous manner. BFAVerifier comprises two integral components: an abstraction-based method and an MILP-based method. Specifically, we first conduct a reachability analysis with respect to symbolic parameters that represent the potential bit-flip attacks, based on a novel abstract domain with a soundness guarantee. If the reachability analysis fails to prove resilience against such attacks, we encode the verification problem as an equivalent MILP problem that can be solved by off-the-shelf solvers. BFAVerifier is therefore sound, complete, and reasonably efficient. Extensive experiments demonstrate its effectiveness and efficiency across various network architectures, quantization bit-widths, and adversary capabilities. The source code of our tool and the benchmarks are available at https://anonymous.4open.science/r/BFAVerifier-B315.
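To make the attack model concrete, the sketch below enumerates single-bit flips on the two's-complement encodings of a toy 4-bit quantized neuron's weights and checks an output bound by brute force. This is a naive baseline for illustration only, not BFAVerifier's abstraction-based or MILP-based method; the bit-width, weights, input, and bound are all hypothetical.

```python
# Illustrative sketch (NOT BFAVerifier's method): brute-force enumeration of
# single-bit flips on 4-bit signed (two's-complement) quantized weights.
# All concrete values below are made up for the example.

BITS = 4  # quantization bit-width

def flip_bit(w, i):
    """Flip bit i of the two's-complement encoding of w and decode."""
    enc = w & ((1 << BITS) - 1)      # encode as a 4-bit pattern
    enc ^= (1 << i)                  # flip one bit
    if enc >= (1 << (BITS - 1)):     # decode back to a signed integer
        enc -= (1 << BITS)
    return enc

def output(weights, x):
    """A single quantized neuron: dot product (bias omitted)."""
    return sum(w * xi for w, xi in zip(weights, x))

def vulnerable_params(weights, x, bound):
    """Return (index, bit) pairs whose single flip violates |y| <= bound."""
    hits = []
    for idx, w in enumerate(weights):
        for bit in range(BITS):
            flipped = list(weights)
            flipped[idx] = flip_bit(w, bit)
            if abs(output(flipped, x)) > bound:
                hits.append((idx, bit))
    return hits

weights = [3, -2, 1]  # 4-bit signed weights, range [-8, 7]
x = [1, 1, 1]         # fixed input
print(vulnerable_params(weights, x, bound=6))  # -> [(1, 3)]
```

Here only flipping the sign bit of the weight -2 (turning it into +6) pushes the output to 10, beyond the bound. Such exhaustive enumeration scales exponentially with the number of flippable bits, which is precisely why the paper resorts to a sound abstraction first and an exact MILP encoding only on the residual cases.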


Sat 18 Oct

Displayed time zone: Perth

10:30 - 12:15
Verification 2 (OOPSLA) at Orchid West
10:30
15m
Talk
FO-Complete Program Verification for Heap Logics
OOPSLA
Adithya Murali University of Illinois at Urbana-Champaign, Hrishikesh Balakrishnan University of Illinois Urbana-Champaign, Aaron Councilman Univ of Illinois Urbana-Champaign, P. Madhusudan University of Illinois at Urbana-Champaign
10:45
15m
Talk
Foundations for Deductive Verification of Continuous Probabilistic Programs: From Lebesgue to Riemann and Back
OOPSLA
Kevin Batz RWTH Aachen University, Joost-Pieter Katoen RWTH Aachen University, Francesca Randone Department of Mathematics, Informatics and Geosciences, University of Trieste, Italy, Tobias Winkler RWTH Aachen University
11:00
15m
Talk
Guarding the Privacy of Label-Only Access to Neural Network Classifiers via Formal Verification
OOPSLA
Anan Kabaha Technion, Israel Institute of Technology, Dana Drachsler Cohen Technion
11:15
15m
Talk
KestRel: Relational Verification Using E-Graphs for Program Alignment
OOPSLA
Robert Dickerson Purdue University, Prasita Mukherjee Purdue University, Benjamin Delaware Purdue University
11:30
15m
Talk
Laurel: Unblocking Automated Verification with Large Language Models
OOPSLA
Eric Mugnier University of California San Diego, Emmanuel Anaya Gonzalez UCSD, Nadia Polikarpova University of California at San Diego, Ranjit Jhala University of California at San Diego, Zhou Yuanyuan UCSD
11:45
15m
Talk
Scaling Instruction-Selection Verification against Authoritative ISA Semantics
OOPSLA
Michael McLoughlin Carnegie Mellon University, Ashley Sheng Wellesley College, Chris Fallin F5, Bryan Parno Carnegie Mellon University, Fraser Brown CMU, Alexa VanHattum Wellesley College
12:00
15m
Talk
Verification of Bit-Flip Attacks against Quantized Neural Networks
OOPSLA
Yedi Zhang National University of Singapore, Lei Huang ShanghaiTech University, Pengfei Gao ByteDance, Fu Song Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences; Nanjing Institute of Software Technology, Jun Sun Singapore Management University, Jin Song Dong National University of Singapore