SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention
Multi-head-self-attention (MHSA) mechanisms achieve state-of-the-art (SOTA) performance across natural language processing and vision tasks. However, their quadratic dependence on sequence length bottlenecks inference speeds. To circumvent this bottleneck, researchers have proposed various sparse-MHSA models, where only a subset of full attention is computed. Despite their promise, current sparse libraries and compilers do not support high-performance implementations for $\textit{diverse}$ sparse-MHSA patterns due to the underlying sparse formats they operate on. On one end, sparse libraries operate on $\textit{general sparse formats}$, which target extreme amounts of random sparsity (<1% non-zero values) and carry metadata that scales with the number of non-zeros, $O(nnz)$. On the other end, hand-written kernels operate on $\textit{custom sparse formats}$, which target specific sparse-MHSA patterns. However, the sparsity patterns in sparse-MHSA are moderately sparse (10-50% non-zero values) and varied, so general sparse formats incur high metadata overhead while custom sparse formats cover few sparse-MHSA patterns, trading off generality for performance.
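To make the metadata argument concrete, the sketch below (not from the paper; the sequence length and window width are assumed values) builds a sliding-window attention mask, a typical regular sparse-MHSA pattern, and compares the size of CSR-style metadata, which grows with the number of non-zeros, against the handful of parameters that actually describe the pattern.

```python
# Illustrative sketch (hypothetical sizes): a sliding-window attention mask is
# moderately sparse and highly regular, yet a general format such as CSR still
# stores per-non-zero metadata.
import numpy as np

seq_len, window = 4096, 256            # assumed values, not from the paper
mask = np.zeros((seq_len, seq_len), dtype=bool)
for i in range(seq_len):
    lo, hi = max(0, i - window), min(seq_len, i + window + 1)
    mask[i, lo:hi] = True              # band of width ~2*window+1 around the diagonal

nnz = int(mask.sum())                  # moderate density (~12% for these sizes)
csr_metadata = nnz + seq_len + 1       # CSR: one column index per non-zero + row pointers
print(f"density              = {nnz / mask.size:.1%}")
print(f"CSR metadata entries = {csr_metadata}  (grows with nnz)")
print("pattern parameters    = 2  (seq_len, window): a fixed-size description")
```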
We bridge this gap, achieving both generality and performance, by proposing a novel sparse format, affine-compressed-sparse-row (ACSR), and a supporting code-generation scheme, SPLAT, that generates high-performance implementations for diverse sparse-MHSA patterns on GPUs. Core to our proposed format and code-generation algorithm is the observation that common sparse-MHSA patterns have uniquely regular geometric properties. These properties, which can be analyzed just-in-time, expose novel optimizations and tiling strategies that SPLAT exploits to generate high-performance implementations for diverse patterns. To demonstrate SPLAT's efficacy, we use it to generate code for various sparse-MHSA models, achieving geomean speedups of 2.05x and 4.05x over hand-written kernels written in Triton and TVM respectively on A100 GPUs. Moreover, its interfaces are intuitive and easy to use with existing implementations of MHSA in JAX.
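SPLAT's actual interfaces are not reproduced here; as a point of reference for how such regular patterns compose with an existing MHSA implementation in JAX, the following is a minimal masked-attention sketch, with the sliding-window pattern expressed as a dense mask purely for clarity (all sizes are assumed).

```python
# Minimal JAX sketch of masked MHSA over a regular sparsity pattern.
# This is NOT SPLAT's API; it only illustrates the sliding-window pattern
# class the abstract refers to, written as ordinary dense masked attention.
import jax
import jax.numpy as jnp

def sliding_window_mask(seq_len: int, window: int) -> jnp.ndarray:
    # Banded pattern: position i attends to positions within `window` of i.
    idx = jnp.arange(seq_len)
    return jnp.abs(idx[:, None] - idx[None, :]) <= window

def masked_attention(q, k, v, mask):
    # q, k, v: [heads, seq_len, head_dim]; mask: [seq_len, seq_len]
    scores = jnp.einsum("hqd,hkd->hqk", q, k) / jnp.sqrt(q.shape[-1])
    scores = jnp.where(mask[None, :, :], scores, -jnp.inf)  # mask out non-pattern entries
    return jnp.einsum("hqk,hkd->hqd", jax.nn.softmax(scores, axis=-1), v)

# Usage with hypothetical sizes.
h, s, d = 4, 512, 64
kq, kk, kv = jax.random.split(jax.random.PRNGKey(0), 3)
q = jax.random.normal(kq, (h, s, d))
k = jax.random.normal(kk, (h, s, d))
v = jax.random.normal(kv, (h, s, d))
out = masked_attention(q, k, v, sliding_window_mask(s, window=32))
print(out.shape)  # (4, 512, 64)
```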
Fri 17 Oct (displayed time zone: Perth)
10:30 - 12:15

10:30 15m Talk: Efficient Algorithms for the Uniform Tokenization Problem (OOPSLA)
10:45 15m Talk: REPTILE: Performant Tiling of Recurrences (OOPSLA). Muhammad Usman Tariq (Stanford University), Shiv Sundram (Stanford University), Fredrik Kjolstad (Stanford University)
11:00 15m Talk: SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention (OOPSLA). Ahan Gupta (University of Illinois at Urbana-Champaign), Yueming Yuan (University of Illinois Urbana-Champaign), Devansh Jain (University of Illinois at Urbana-Champaign), Yuhao Ge (University of Illinois at Urbana-Champaign), David Aponte (Microsoft), Yanqi Zhou (Google), Charith Mendis (University of Illinois at Urbana-Champaign)
11:15 15m Talk: Statically Analyzing the Dataflow of R Programs (OOPSLA)
11:30 15m Talk: Static Inference of Regular Grammars for Ad Hoc Parsers (OOPSLA). Pre-print
11:45 15m Talk: Syntactic Completions with Material Obligations (OOPSLA). David Moon (University of Michigan), Andrew Blinn (University of Michigan), Thomas J. Porter (University of Michigan), Cyrus Omar (University of Michigan). DOI, Pre-print