Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes

Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

TVM is licensed under the Apache-2.0 license.

Getting Started

Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

History and Acknowledgement

TVM started as a research project on deep learning compilation. The first version of the project benefited greatly from the following projects:

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned from and adapted parts of its lowering pipeline.
  • Loopy: the use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.

Since then, the project has gone through several rounds of redesign. The current design differs drastically from the initial one, following the development trends of the ML compiler community.

The most recent version focuses on a cross-level design, with TensorIR as the tensor-level representation, Relax as the graph-level representation, and Python-first transformations. The current design goal of the project is to make the ML compiler accessible: most transformations can be customized in Python, and the cross-level representation can jointly optimize computational graphs, tensor programs, and libraries. The project also serves as foundational infrastructure for building Python-first vertical compilers for various domains, such as LLMs.
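To illustrate the Python-first approach, below is a minimal sketch of a TensorIR transformation written entirely in Python. It assumes a recent TVM build with TVMScript support; the exact syntax (e.g., the `T.Buffer` annotation form) varies between TVM versions, so treat this as a sketch rather than a canonical example.

```python
# Minimal sketch, assuming a recent TVM build with TVMScript support.
import tvm
from tvm.script import tir as T


@T.prim_func
def add_one(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")):
    # An element-wise kernel expressed as TensorIR via TVMScript.
    for i in range(128):
        with T.block("add_one"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0


# Schedule transformations are ordinary Python calls on the IR.
sch = tvm.tir.Schedule(add_one)
block = sch.get_block("add_one")
(i,) = sch.get_loops(block)
outer, inner = sch.split(i, factors=[None, 32])  # tile the loop in Python
sch.mod.show()  # print the transformed TensorIR as round-trippable Python
```

The same style extends to the graph level: Relax programs are likewise written and rewritten as Python, so an end-to-end pipeline can mix graph rewrites, tensor-program scheduling, and library dispatch in a single script.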