SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore
co-located with ICFP/SPLASH 2025

This program is tentative and subject to change.

Thu 16 Oct 2025 16:30 - 16:45 at Orchid East - Compilation 2

Over the years, multiple frameworks and optimizations have been proposed for accelerating Graph Neural Network (GNN) workloads, achieving sizable runtime performance improvements. However, existing systems typically optimize either at the intra-operator level or at the inter-operator level, missing the synergies that arise from composing the two. Further, most existing work focuses primarily on optimizing the forward computation of GNNs, often overlooking opportunities for training-specific optimizations.

To exploit these missed optimization opportunities, we introduce GALA, a domain-specific language (DSL) and compiler that allows composing optimizations at different levels. The GALA DSL exposes intra-operator transformations as scheduling commands, while we introduce novel inter-operator transformations as part of the compiler. Composing these transformations is made possible by two novel intermediate representations (IRs) in the GALA compiler that track and compose transformations at both the intra- and inter-operator levels. Further, the IRs maintain a global view of the GNN program, including its training process, which allows us to introduce training-specific transformations that aggressively optimize GNN training. Our evaluations show that GALA achieves a geometric-mean speedup of 2.55× for inference and 2.52× for training across multiple systems, graphs, and GNN models. We also show that GALA performs well across different graph sizes and GNN model configurations, and that it allows users to explore different methods of performing similar optimizations, leading to different tradeoff spaces.
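The abstract does not show GALA's concrete DSL syntax, so the sketch below is only a rough, hypothetical illustration of the optimization space it describes: a one-layer GCN written as two composable operators, with comments marking where intra-operator scheduling (the sparse aggregation) and an inter-operator reordering (applying the dense update before aggregation when it shrinks the feature width) would apply. All function names and the reordering heuristic are assumptions for illustration, not GALA's API.

# Illustrative sketch (not GALA's actual API): a one-layer GCN forward pass
# written as separate operators, with comments marking where intra- and
# inter-operator transformations of the kind described above could apply.
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj: sp.csr_matrix, x: np.ndarray, w: np.ndarray) -> np.ndarray:
    # Inter-operator view: the layer composes two operators,
    # aggregate (sparse adj @ X) and update (dense H @ W).
    # An IR with a global view of both operators can reorder them: applying
    # the dense update first shrinks the feature width before the expensive
    # sparse multiply whenever w.shape[1] < x.shape[1] -- an example of an
    # inter-operator transformation.
    if w.shape[1] < x.shape[1]:
        h = x @ w      # update first (narrower features for aggregation)
        out = adj @ h  # intra-operator level: this SpMM is where scheduling
                       # commands (tiling, vectorization, row load balancing)
                       # would be applied
    else:
        h = adj @ x    # aggregate first
        out = h @ w
    return np.maximum(out, 0.0)  # ReLU

# Tiny usage example on a 4-node graph.
rows, cols = [0, 1, 2, 3, 0, 2], [1, 0, 3, 2, 2, 0]
adj = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
x = np.random.default_rng(0).standard_normal((4, 8))
w = np.random.default_rng(1).standard_normal((8, 4))
print(gcn_layer(adj, x, w).shape)  # -> (4, 4)

In a scheduling-language setting, the branch above would instead be expressed as a compiler transformation over an IR that sees both operators at once, which is the kind of global program view the abstract attributes to GALA's IRs.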

Thu 16 Oct

Displayed time zone: Perth

16:00 - 17:30
Compilation 2 (OOPSLA) at Orchid East
16:00
15m
Talk
An Empirical Study of Bugs in the rustc Compiler
OOPSLA
Zixi Liu Nanjing University, Yang Feng Nanjing University, Yunbo Ni The Chinese University of Hong Kong, Shaohua Li The Chinese University of Hong Kong, Xizhe Yin Nanjing University, Qingkai Shi Nanjing University, Baowen Xu Nanjing University, Zhendong Su ETH Zurich
16:15
15m
Talk
DESIL: Detecting Silent Bugs in MLIR Compiler Infrastructure
OOPSLA
Chenyao Suo Tianjin University, Jianrong Wang Tianjin University, Yongjia Wang College of Intelligence and Computing, Tianjin University, Jiajun Jiang Tianjin University, Qingchao Shen Tianjin University, Junjie Chen Tianjin University
16:30
15m
Talk
GALA: A High Performance Graph Neural Network Acceleration LAnguage and Compiler
OOPSLA
Damitha Lenadora University of Illinois at Urbana-Champaign, Nikhil Jayakumar University of Texas at Austin, Chamika Sudusinghe University of Illinois at Urbana-Champaign, Charith Mendis University of Illinois at Urbana-Champaign
16:45
15m
Talk
Non-interference Preserving Optimising Compilation
OOPSLA
Julian Rosemann Saarland University, Saarland Informatics Campus, Sebastian Hack Saarland University, Saarland Informatics Campus, Deepak Garg MPI-SWS
17:00
15m
Talk
Synchronized Behavior Checking: A Method for Finding Missed Compiler Optimizations
OOPSLA
Yi Zhang Nanjing University, Yu Wang Nanjing University, Linzhang Wang Nanjing University, Ke Wang Peking University
17:15
15m
Talk
Tabby: A Synthesis-Aided Compiler for High-Performance Zero-Knowledge Proof Circuits
OOPSLA
Junrui Liu University of California, Santa Barbara, Jiaxin Song University of Illinois at Urbana-Champaign, Yanning Chen University of Toronto, Hanzhi Liu University of California, Santa Barbara & Riema Labs, Hongbo Wen University of California, Santa Barbara & Riema Labs, Luke Pearson Polychain Capital, Yanju Chen University of California, San Diego, Yu Feng University of California, Santa Barbara