SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore
co-located with ICFP/SPLASH 2025

This program is tentative and subject to change.

Thu 16 Oct 2025 16:00 - 16:15 at Orchid Plenary Ballroom - Parallelism

Tensor algebra is a crucial component of data-intensive workloads such as machine learning and scientific computing. As data complexity grows, scientists often face a dilemma between highly specialized dense tensor algebra and the efficient structure-aware algorithms provided by sparse tensor algebra. In this paper, we introduce DASTAC, a framework that propagates a tensor's captured high-level structure down to low-level code generation by incorporating techniques such as automatic data layout compression, polyhedral analysis, and affine code generation. Our methodology reduces the memory footprint by automatically detecting the best data layout, benefits heavily from polyhedral optimizations, and enables further optimization and parallelization through MLIR. Through extensive experimentation, we show that DASTAC achieves one to two orders of magnitude speedup, with a significantly lower memory footprint, over TACO, a state-of-the-art sparse tensor compiler; hand-tuned specialized expert code; and StructTensor, a state-of-the-art structured tensor algebra compiler.
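To give a flavor of the data layout compression idea the abstract mentions, here is a minimal, hypothetical Python sketch (not DASTAC's actual implementation): an upper-triangular matrix is packed into a dense one-dimensional buffer via a closed-form affine index map, so only the structurally meaningful entries are stored.

```python
def pack_upper_triangular(A):
    """Pack the upper triangle of an n x n matrix (list of lists) into a flat list."""
    n = len(A)
    return [A[i][j] for i in range(n) for j in range(i, n)]

def packed_index(i, j, n):
    """Affine index of entry (i, j), with j >= i, in the packed layout.
    Rows 0..i-1 contribute n, n-1, ..., n-i+1 entries before row i starts."""
    return i * n - i * (i - 1) // 2 + (j - i)

n = 4
# An upper-triangular test matrix: zeros below the diagonal.
A = [[i * n + j if j >= i else 0 for j in range(n)] for i in range(n)]
packed = pack_upper_triangular(A)
assert len(packed) == n * (n + 1) // 2       # 10 stored values instead of 16
assert packed[packed_index(1, 2, n)] == A[1][2]
```

Because the index map is affine in `i` and `j`, loops over the compressed buffer remain amenable to polyhedral-style analysis, which is the property the abstract's pipeline relies on.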

Thu 16 Oct

Displayed time zone: Perth

16:00 - 17:30: Parallelism

16:00 (15m) Talk: Compressed and Parallelized Structured Tensor Algebra (OOPSLA)
Mahdi Ghorbani (University of Edinburgh), Emilien Bauer, Tobias Grosser (University of Cambridge), Amir Shaikhha (University of Edinburgh)

16:15 (15m) Talk: Exploring the Theory and Practice of Concurrency in the Entity-Component-System Pattern (OOPSLA)
Patrick Redmond (University of California, Santa Cruz), Jonathan Castello (University of California, Santa Cruz), Jose Calderon (Galois, Inc.), Lindsey Kuper (University of California, Santa Cruz)
Pre-print

16:30 (15m) Talk: HieraSynth: A Parallel Framework for Complete Super-Optimization with Hierarchical Space Decomposition (OOPSLA)
Sirui Lu (OpenAI), Rastislav Bodík (Google Research, Brain Team)

16:45 (15m) Talk: Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness (OOPSLA)
Dongjae Lee (Massachusetts Institute of Technology), Janggun Lee (KAIST), Taeyoung Yoon (Seoul National University), Minki Cho (Seoul National University), Jeehoon Kang (FuriosaAI), Chung-Kil Hur (Seoul National University)

17:00 (15m) Talk: Opportunistically Parallel Lambda Calculus (OOPSLA)
Stephen Mell (University of Pennsylvania), Konstantinos Kallas (University of California, Los Angeles), Steve Zdancewic (University of Pennsylvania), Osbert Bastani (University of Pennsylvania)

17:15 (15m) Talk: Soundness of Predictive Concurrency Analyses (OOPSLA)
Shuyang Liu, Doug Lea (State University of New York (SUNY) Oswego), Jens Palsberg (University of California, Los Angeles (UCLA))