Finch: Sparse and Structured Tensor Programming with Control Flow
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, these and almost all other prominent systems can only handle dense rectilinear integer grids. Real-world tensors often contain underlying structure, such as sparsity, runs of repeated values, or symmetry, yet support for structured data remains fragmented and incomplete. Existing frameworks restrict either the tensor structures or the program control flow they support in order to simplify the problem.
In this work, we propose a new programming language, Finch, which supports *both* flexible control flow and diverse data structures. Finch facilitates a programming model that resolves the challenges of computing over structured tensors by combining control flow and data structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data, so performance engineers can focus on experimenting with many algorithms. Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length encoding, symmetry, triangles, padding, or blocks. Finch reliably exploits the key properties of structure, such as structural zeros, repeated values, or clustered non-zeros. We show that this leads to dramatic speedups in operations such as SpMV and SpGEMM, image processing, and graph analytics.
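The abstract's central idea, specializing control flow to data structure, can be illustrated with SpMV. The following is a minimal Python sketch, not Finch syntax: it contrasts a generic dense loop nest with the same loop nest specialized to a CSR representation, where the inner loop skips structural zeros by walking only stored entries. (The CSR field names `pos`, `idx`, and `val` are illustrative conventions, not an API from the paper.)

```python
def spmv_dense(A, x):
    # Generic control flow: visits every (i, j) entry, O(m * n),
    # with no knowledge of the matrix's structure.
    m, n = len(A), len(x)
    y = [0.0] * m
    for i in range(m):
        for j in range(n):
            y[i] += A[i][j] * x[j]
    return y

def spmv_csr(pos, idx, val, x, m):
    # The same loop nest specialized to CSR structure: the inner loop
    # iterates only over stored nonzeros of row i (positions
    # pos[i]..pos[i+1]), so structural zeros cost nothing.
    y = [0.0] * m
    for i in range(m):
        for p in range(pos[i], pos[i + 1]):
            y[i] += val[p] * x[idx[p]]
    return y

# Example: a 3x3 matrix with 5 stored nonzeros.
A = [[1.0, 0.0, 2.0],
     [0.0, 0.0, 3.0],
     [4.0, 5.0, 0.0]]
x = [1.0, 1.0, 1.0]
# CSR encoding of A.
pos = [0, 2, 3, 5]
idx = [0, 2, 2, 0, 1]
val = [1.0, 2.0, 3.0, 4.0, 5.0]

assert spmv_dense(A, x) == spmv_csr(pos, idx, val, x, 3)  # same result, fewer iterations
```

Finch's contribution, per the abstract, is that the programmer writes only the generic loop nest and the compiler derives the structure-specialized version automatically for many such formats.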
Thu 16 Oct (displayed time zone: Perth)
16:00 - 17:30 | Session: Neural Network (OOPSLA) at Orchid West
Chair(s): Jiasi Shen (The Hong Kong University of Science and Technology)

16:00 (15m, Talk) Convex Hull Approximation for Activation Functions. Zhongkui Ma (The University of Queensland), Zihan Wang (The University of Queensland and CSIRO's Data61), Guangdong Bai (University of Queensland)

16:15 (15m, Talk) Cost of Soundness in Mixed-Precision Tuning. Pre-print

16:30 (15m, Talk) Finch: Sparse and Structured Tensor Programming with Control Flow. Willow Ahrens (Massachusetts Institute of Technology), Teodoro F. Collin (MIT CSAIL), Radha Patel (MIT CSAIL), Kyle Deeds (University of Washington), Changwan Hong (Massachusetts Institute of Technology), Saman Amarasinghe (Massachusetts Institute of Technology)

16:45 (15m, Talk) MetaKernel: Enabling Efficient Encrypted Neural Network Inference Through Unified MVM and Convolution. Peng Yuan (Ant Group), Yan Liu (Ant Group), Jianxin Lai (Ant Group), Long Li (Ant Group), Tianxiang Sui (Ant Group), Linjie Xiao (Ant Group), Xiaojing Zhang (Ant Group), Qing Zhu (Ant Group), Jingling Xue (University of New South Wales)

17:00 (15m, Talk) Quantization with Guaranteed Floating-Point Neural Network Classifications

17:15 (15m, Talk) The Continuous Tensor Abstraction: Where Indices are Real. Jaeyeon Won (MIT), Willow Ahrens (Massachusetts Institute of Technology), Teodoro F. Collin (MIT CSAIL), Joel S Emer (MIT/NVIDIA), Saman Amarasinghe (Massachusetts Institute of Technology)