Apache TVM: the open-source compiler backbone of OctoML
Apache TVM is an open-source deep learning compiler framework that empowers engineers to optimize and run computations efficiently on any hardware backend.

Simplified deployment
Enables compilation of deep learning models in Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimal deployable modules on diverse hardware backends.

Automatic optimization
Provides the infrastructure to automatically generate and optimize tensor operators, delivering better performance across a wider range of hardware backends.
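The idea behind automatic optimization can be sketched in plain Python: an auto-tuner measures a space of candidate implementations of the same operator and keeps the fastest one. This is a toy illustration only, not TVM's actual tuning machinery (which lives in components such as AutoTVM and the auto-scheduler); the candidate list and timing loop here are invented for the example.

```python
import time
import numpy as np

def candidates():
    """Three numerically equivalent implementations of a matrix product."""
    yield "dot", lambda a, b: a.dot(b)
    yield "einsum", lambda a, b: np.einsum("ij,jk->ik", a, b)
    yield "matmul", lambda a, b: a @ b

def autotune(a, b, repeats=3):
    """Time every candidate on the given inputs and return the fastest one's name."""
    best_name, best_time = None, float("inf")
    for name, fn in candidates():
        start = time.perf_counter()
        for _ in range(repeats):
            fn(a, b)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
best = autotune(a, b)  # which candidate wins depends on the machine
```

TVM's real search spaces cover loop tilings, vectorization, and memory layouts per backend, but the select-by-measurement principle is the same.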

Meet the fastest growing open-source ML community
Started as a research project in the SAMPL group at the University of Washington's Paul G. Allen School of Computer Science & Engineering, TVM is now in incubation at The Apache Software Foundation (ASF). With 500+ contributors and 6,000+ members using the framework for optimization (including engineers at Amazon, Facebook, Arm, Qualcomm, and more), it is quickly becoming the standard for deep learning compilation.

See who’s putting TVM to work

How it works
TVM enables two kinds of optimizations:
Computational graph optimization performs high-level tasks such as operator fusion, layout transformation, and memory management.
Tensor operator optimization and code generation produces optimized low-level code for the individual tensor operators that implement each graph node.
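The payoff of graph-level operator fusion can be shown with a small NumPy sketch. This is an illustration of the concept, not TVM internals: computing relu(x + y) as two separate graph operators materializes an intermediate tensor, while the fused version applies both operations in a single pass with no temporary buffer.

```python
import numpy as np

def unfused(x, y):
    t = x + y                 # op 1: add (writes an intermediate buffer)
    return np.maximum(t, 0)   # op 2: relu (reads that buffer back)

def fused(x, y):
    # One fused loop nest: add and relu applied element by element,
    # so no intermediate tensor is ever allocated.
    out = np.empty_like(x)
    for i in np.ndindex(x.shape):
        s = x[i] + y[i]
        out[i] = s if s > 0 else 0.0
    return out

x = np.array([[1.0, -2.0], [3.0, -4.0]])
y = np.array([[0.5, 0.5], [-5.0, 5.0]])
assert np.allclose(unfused(x, y), fused(x, y))
```

In TVM the fused loop nest is then handed to the tensor-operator layer, which generates fast native code for it on the target backend.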

It's easy to get involved

Learn from the tutorials
Compile PyTorch, TensorFlow, or CoreML models, and deploy pre-trained models on Android.

See the current TVM roadmap
Discover what's next with TVM — auto scheduling, quantization, auto tensorization, and more.

Join the discussion forums
Tap into the collective knowledge and expertise of over 3,000 users (and counting!).
