The open-source backbone of OctoML

Apache TVM is an open-source deep learning compiler framework that empowers engineers to optimize and run computations efficiently on any hardware backend.


Simplified deployment

Enables compilation of deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimal deployable modules on diverse hardware backends.
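
For a sense of what this looks like in practice, here is a minimal sketch of compiling a traced PyTorch model through TVM's Relay API. The model choice, the input name "input0", and the shapes are illustrative assumptions, not part of the page above.

```python
# Sketch: compile a traced PyTorch model with TVM Relay and run it locally.
# Model, input name "input0", and shape are illustrative assumptions.
import numpy as np
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace a PyTorch model so the Relay frontend can import it.
model = torchvision.models.resnet18().eval()
input_shape = (1, 3, 224, 224)
scripted = torch.jit.trace(model, torch.randn(input_shape))

# Import into Relay (TVM's high-level IR) and compile for a CPU target.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", input_shape)])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Deploy the compiled module with the lightweight graph executor.
dev = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input0", np.random.rand(*input_shape).astype("float32"))
runtime.run()
print(runtime.get_output(0).shape)
```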


Automatic optimization

Provides the infrastructure to automatically generate and optimize tensor operators across more backends with better performance.
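
As a rough sketch of that automatic optimization, the snippet below uses TVM's auto_scheduler to tune the tensor operators of a model and rebuild it with the tuned schedules. It assumes `mod` and `params` came from a Relay frontend import as in the compilation sketch above; the trial count and log file name are illustrative.

```python
# Sketch: automatic tensor-operator tuning with TVM's auto_scheduler.
# Assumes `mod` and `params` come from a Relay frontend import (see the
# compilation sketch above); trial count and log file name are illustrative.
import tvm
from tvm import auto_scheduler, relay

target = tvm.target.Target("llvm")

# Extract the tunable tensor-operator tasks from the model.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

# Search for fast schedules on the local machine and log the best ones found.
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=200,
    measure_callbacks=[auto_scheduler.RecordToFile("tuning_log.json")],
))

# Re-compile the model using the tuned schedules.
with auto_scheduler.ApplyHistoryBest("tuning_log.json"):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```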


Why OctoML?

Meet the fastest growing open-source ML community.

TVM started as a research project in the SAMPL group at the Paul G. Allen School of Computer Science & Engineering, University of Washington, and is now in incubation at the Apache Software Foundation (ASF). With 500+ contributors and 6,000+ community members using the framework for optimization, including Amazon, Facebook, Arm, Qualcomm, and more, it is quickly becoming the standard for deep learning compilation.

Paul G Allen School
Apache Software Foundation

See who’s putting TVM to work

SiMa AI
Microsoft
AMD
Arm
Huawei
Qualcomm
Cornell University
AWS
Alibaba Cloud
Facebook
Intel
Nvidia
EDGE Cortix
UCLA

How it works

TVM enables two kinds of optimizations:

  • Computational graph optimization: high-level passes such as operator fusion, layout transformation, and memory management.
  • Tensor operator optimization and code generation: a lower-level layer that generates and tunes the implementations of individual tensor operators for each hardware backend (see the sketch after this list).
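
To make the graph-level kind concrete, the sketch below builds a tiny Relay graph and applies TVM's operator-fusion pass. The shapes and the fusion level are illustrative assumptions.

```python
# Sketch: graph-level operator fusion in TVM Relay.
# Shapes and fuse_opt_level are illustrative; conv2d + bias_add + relu
# end up grouped into a single fused primitive function.
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16, 32, 32))
weight = relay.var("weight", shape=(32, 16, 3, 3))
bias = relay.var("bias", shape=(32,))

conv = relay.nn.conv2d(data, weight, padding=(1, 1))
out = relay.nn.relu(relay.nn.bias_add(conv, bias))
mod = tvm.IRModule.from_expr(relay.Function([data, weight, bias], out))

# Type inference is required before most Relay passes.
mod = relay.transform.InferType()(mod)
fused = relay.transform.FuseOps(fuse_opt_level=2)(mod)
print(fused)  # the three ops now appear inside one fused function
```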

TVM Community

It's easy to get involved

Maximize performance. Simplify deployment.

Ready to get started?