Declarative Machine Learning with Einsummable

Date
2024-07-23
Abstract

Modern tensor-based machine learning (ML) systems such as PyTorch and TensorFlow offer high performance but have significant limitations for large-scale ML. These systems require a programmer to manually decompose ML computations so that they can run on multiple machines. Not only is this challenging for end users, but moving from one hardware setup to the next requires rewriting substantial amounts of code.

We introduce a new end-to-end ML system called "Einsummable" that automatically decomposes computations to match the available hardware. Unlike existing systems, we are guided by one fundamental design principle: at all costs, the user may only say what they want to compute, not how it is to be computed. Instead of painstakingly building a "model parallel" or "data parallel" implementation, a user of Einsummable need only express their computation in our Einsummable language.

To make Einsummable a reality, we designed the Einsummable language for users to interact with, producing what we call EinGraphs. The Einsummable language is built on the extended Einstein summation notation, which is familiar to many ML practitioners. Our language is expressive enough to represent state-of-the-art generative ML models, including Llama. In addition, we support automatic differentiation.
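For context, the snippet below shows standard Einstein summation as exposed by NumPy's einsum; this is the notation family that Einsummable extends, not Einsummable's own syntax, and the shapes are arbitrary examples.

```python
# Illustrative only: standard Einstein summation via NumPy, the notation
# family the Einsummable language extends (Einsummable's actual syntax
# may differ).
import numpy as np

x = np.random.rand(32, 64)   # [batch, features]
w = np.random.rand(64, 128)  # [features, hidden]

# "bi,ij->bj": sum over the repeated index i (features),
# yielding a [batch, hidden] matrix product.
y = np.einsum("bi,ij->bj", x, w)

# A batched, multi-head attention-style contraction, the kind of
# operation generative models such as Llama are built from.
q = np.random.rand(8, 16, 32, 64)  # [batch, heads, seq, dim]
k = np.random.rand(8, 16, 32, 64)
scores = np.einsum("bhqd,bhkd->bhqk", q, k)  # [batch, heads, seq, seq]
```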

At the other end of the abstraction spectrum, we created a compute graph specification for machines to execute, called TaskGraphs. TaskGraphs are designed to be executed by distributed, asynchronous compute engines. For our experiments, we built a distributed CPU execution engine, scaling to 32 machines, each with 64 processors. Even though we targeted CPU clusters, the TaskGraph abstraction is also suitable for clusters of GPUs.
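The abstract does not spell out the TaskGraph format, but the sketch below conveys the general idea of dependency-driven, asynchronous execution: each task is dispatched to a worker pool as soon as the tasks it depends on have produced their outputs. All names and structures here are hypothetical illustrations, not the thesis's actual specification.

```python
# Hypothetical sketch of an asynchronous task-graph executor, in the
# spirit of the TaskGraph abstraction; not the actual specification.
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_task_graph(tasks, num_workers=4):
    """Run (name, fn, dep_names) tasks, dispatching each task as soon
    as all of its dependencies have produced results."""
    fns = {name: fn for name, fn, _ in tasks}
    dep_names = {name: deps for name, _, deps in tasks}
    deps_left = {name: set(deps) for name, _, deps in tasks}
    results, pending = {}, {}
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        while len(results) < len(tasks):
            # Dispatch every task whose inputs are ready; independent
            # tasks run concurrently on the worker pool.
            for name in deps_left:
                if not deps_left[name] and name not in pending and name not in results:
                    args = [results[d] for d in dep_names[name]]
                    pending[name] = pool.submit(fns[name], *args)
            # Wait for at least one task to finish, then record its
            # result and unblock its dependents.
            done, _ = wait(pending.values(), return_when=FIRST_COMPLETED)
            for name, fut in list(pending.items()):
                if fut in done:
                    results[name] = fut.result()
                    del pending[name]
                    for missing in deps_left.values():
                        missing.discard(name)
    return results

# Example: A and B have no dependencies and run concurrently;
# C is dispatched only once both of their results are available.
graph = [
    ("A", lambda: 2, []),
    ("B", lambda: 3, []),
    ("C", lambda a, b: a + b, ["A", "B"]),
]
print(run_task_graph(graph)["C"])  # prints 5
```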

Most importantly, given hardware parameters, we compile EinGraphs into TaskGraphs without user intervention. The discovered TaskGraph solution may well recover the common model parallel or data parallel solutions. Our main algorithm for this step, called EinDecomp, decomposes EinGraphs so that the computation exposes enough parallelism to keep all processors busy without introducing an undue communication burden.
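As an illustration of the kind of decomposition EinDecomp searches for (this sketch is not the EinDecomp algorithm itself), the NumPy code below splits a single matrix-multiply contraction along the free index i into four independent blocks, each of which could be placed on a different machine with no cross-machine reduction. Splitting the summed index j instead would require aggregating partial sums across machines, which is exactly the communication cost a decomposition must weigh against added parallelism.

```python
# Illustrative only: decomposing one contraction into independent
# sub-tasks; the real EinDecomp algorithm chooses partitions to balance
# parallelism against communication.
import numpy as np

a = np.random.rand(1024, 256)
b = np.random.rand(256, 512)

reference = np.einsum("ij,jk->ik", a, b)

# Partition the free index i into 4 blocks; each block is an
# independent sub-task that could run on a different machine.
blocks = [np.einsum("ij,jk->ik", a_blk, b) for a_blk in np.split(a, 4)]
result = np.concatenate(blocks)

assert np.allclose(reference, result)
```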

Degree
Doctor of Philosophy
Type
Thesis
Keywords
Machine Learning, Deep Learning, Automatic Parallelism, Distributed Computing
Citation

Bourgeois, Daniel Christopher. Declarative Machine Learning with Einsummable. (2024). PhD diss., Rice University. https://hdl.handle.net/1911/117775

Rights
Copyright is held by the author, unless otherwise indicated. Permission to reuse, publish, or reproduce the work beyond the bounds of fair use or other exemptions to copyright law must be obtained from the copyright holder.