SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore
co-located with ICFP/SPLASH 2025

This program is tentative and subject to change.

Thu 16 Oct 2025 14:30 - 14:45 at Orchid West - Reasoning

Fairness in machine learning is more important than ever as ethical concerns continue to grow. Individual fairness demands that individuals differing only in sensitive attributes receive the same outcomes. However, commonly used machine learning algorithms often fail to achieve such fairness. To improve individual fairness, various training methods have been developed, such as incorporating fairness constraints as optimisation objectives. While these methods have demonstrated empirical effectiveness, they lack formal guarantees of fairness. Existing approaches that aim to provide fairness guarantees primarily rely on verification techniques, which can sometimes fail to produce definitive results. Moreover, verification alone does not actively enhance individual fairness during training. To address this limitation, we propose a novel framework that formally guarantees individual fairness throughout training. Our approach consists of two parts: (1) provably fair initialisation that ensures the model starts in a fair state, and (2) a fairness-preserving training algorithm that maintains fairness as the model learns. A key element of our method is the use of randomised response mechanisms, which protect sensitive attributes while maintaining fairness guarantees. We formally prove that this mechanism sustains individual fairness throughout the training process. Experimental evaluations confirm that our approach is effective, i.e., it produces models that are empirically fair and accurate. Furthermore, our approach is much more efficient than the alternative approach based on certified training, which requires neural network verification during training.
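The abstract names randomised response as the mechanism that protects sensitive attributes. As a rough illustration only (not the paper's exact construction), the classic binary randomised-response mechanism reports the true attribute value with probability e^ε/(1+e^ε) and the flipped value otherwise; the function name and the `epsilon` parameter below are assumptions for the sketch:

```python
import math
import random

def randomised_response(bit: int, epsilon: float) -> int:
    """Binary randomised response: report the true bit with
    probability e^eps / (1 + e^eps), otherwise the flipped bit."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

# With epsilon = 2.0 the true value is kept with probability
# e^2 / (1 + e^2) ~ 0.88, so any individual report stays deniable.
random.seed(0)
kept = sum(randomised_response(1, 2.0) for _ in range(10_000)) / 10_000
```

Because the reporting probability is independent of the individual's actual attribute value, two individuals who differ only in that attribute induce the same output distribution up to a bounded factor, which is the kind of property the paper's formal fairness argument builds on.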


Thu 16 Oct

Displayed time zone: Perth

13:45 - 15:30
Reasoning (OOPSLA) at Orchid West
13:45
15m
Talk
Characterizing Implementability of Global Protocols with Infinite States and Data
OOPSLA
Elaine Li NYU, Felix Stutz University of Luxembourg, Luxembourg, Thomas Wies New York University, Damien Zufferey SonarSource
14:00
15m
Talk
Checking Observational Correctness of Database Systems
OOPSLA
Lauren Pick The Chinese University of Hong Kong, Amanda Xu University of Wisconsin-Madison, Ankush Desai Amazon Web Services, Sanjit A. Seshia University of California, Berkeley, Aws Albarghouthi University of Wisconsin-Madison
14:15
15m
Talk
Correct Black-Box Monitors for Distributed Deadlock Detection: Formalisation and Implementation
OOPSLA
Radosław Jan Rowicki Technical University of Denmark, Adrian Francalanza University of Malta, Alceste Scalas Technical University of Denmark
DOI Pre-print
14:30
15m
Talk
Correct-By-Construction: Certified Individual Fairness through Neural Network Training
OOPSLA
Ruihan Zhang Singapore Management University, Jun Sun Singapore Management University
14:45
15m
Talk
Modular Reasoning about Global Variables and Their Initialization
OOPSLA
João Pereira ETH Zurich, Isaac van Bakel ETH Zurich, Patricia Firlejczyk ETH Zurich, Marco Eilers ETH Zurich, Peter Müller ETH Zurich
15:00
15m
Talk
P³: Reasoning about Patches via Product Programs
OOPSLA
Arindam Sharma Imperial College London, Daniel Schemmel Imperial College London, Cristian Cadar Imperial College London
15:15
15m
Talk
Reasoning about External Calls
OOPSLA
Julian Mackay Kry10 Ltd, Sophia Drossopoulou Imperial College London, James Noble Independent, Wellington, NZ, Susan Eisenbach Imperial College London