Compressed and Parallelized Structured Tensor Algebra
Tensor algebra is a crucial component of data-intensive workloads such as machine learning and scientific computing. As data grows more complex, scientists often face a dilemma between highly specialized dense tensor algebra and the efficient structure-aware algorithms provided by sparse tensor algebra. In this paper, we introduce DASTAC, a framework that propagates a tensor's captured high-level structure down to low-level code generation by combining automatic data-layout compression, polyhedral analysis, and affine code generation. Our methodology reduces memory footprint by automatically selecting the best data layout, benefits heavily from polyhedral optimizations, and enables further optimization and parallelization through MLIR. Through extensive experimentation, we show that DASTAC achieves one to two orders of magnitude speedup over TACO, a state-of-the-art sparse tensor compiler; hand-tuned specialized expert code; and StructTensor, a state-of-the-art structured tensor algebra compiler, with a significantly lower memory footprint.
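To make the idea of structure-aware data-layout compression concrete, here is a minimal illustrative sketch (not DASTAC's actual algorithm): an upper-triangular matrix is packed into a dense 1-D buffer of n(n+1)/2 entries via an affine index map, and a matrix-vector product is computed directly on the compressed layout. Because the loop bounds and index expressions are affine in the loop variables, such a kernel is amenable to the polyhedral analysis and parallelization the abstract refers to. The function names here are hypothetical.

```python
import numpy as np

def pack_upper_triangular(A):
    """Pack the upper triangle of an n x n matrix into a 1-D buffer of
    n*(n+1)/2 entries, dropping the structurally zero lower half."""
    n = A.shape[0]
    buf = np.empty(n * (n + 1) // 2, dtype=A.dtype)
    for i in range(n):
        base = i * n - i * (i - 1) // 2  # affine offset of row i in the buffer
        buf[base:base + (n - i)] = A[i, i:]
    return buf

def packed_matvec(buf, x):
    """Compute y = A @ x directly on the packed layout; both the loop
    bounds and the index map base + (j - i) are affine in (i, j)."""
    n = x.shape[0]
    y = np.zeros(n, dtype=buf.dtype)
    for i in range(n):
        base = i * n - i * (i - 1) // 2
        for j in range(i, n):
            y[i] += buf[base + (j - i)] * x[j]
    return y
```

Relative to a dense layout this halves storage, and relative to a generic sparse format (e.g. CSR) it avoids storing explicit indices, since the structure is known statically.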
Thu 16 Oct (displayed time zone: Perth)
16:00 - 17:30

| Time | Format | Talk |
|---|---|---|
| 16:00 | 15m Talk | Compressed and Parallelized Structured Tensor Algebra (OOPSLA) — Mahdi Ghorbani (University of Edinburgh), Emilien Bauer (University of Edinburgh), Tobias Grosser (University of Cambridge), Amir Shaikhha (University of Edinburgh) |
| 16:15 | 15m Talk | Exploring the Theory and Practice of Concurrency in the Entity-Component-System Pattern (OOPSLA) — Patrick Redmond (University of California, Santa Cruz), Jonathan Castello (University of California, Santa Cruz), Jose Calderon (Galois, Inc.), Lindsey Kuper (University of California, Santa Cruz) |
| 16:30 | 15m Talk | HieraSynth: A Parallel Framework for Complete Super-Optimization with Hierarchical Space Decomposition (OOPSLA) |
| 16:45 | 15m Talk | Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness (OOPSLA) — Dongjae Lee (Massachusetts Institute of Technology), Janggun Lee (KAIST), Taeyoung Yoon (Seoul National University), Minki Cho (Seoul National University), Jeehoon Kang (FuriosaAI), Chung-Kil Hur (Seoul National University) |
| 17:00 | 15m Talk | Opportunistically Parallel Lambda Calculus (OOPSLA) — Stephen Mell (University of Pennsylvania), Konstantinos Kallas (University of California, Los Angeles), Steve Zdancewic (University of Pennsylvania), Osbert Bastani (University of Pennsylvania) |
| 17:15 | 15m Talk | Soundness of Predictive Concurrency Analyses (OOPSLA) — Shuyang Liu, Doug Lea (State University of New York (SUNY) Oswego), Jens Palsberg (University of California, Los Angeles (UCLA)) |



