Today, I’m incredibly excited to announce our latest milestone with customers, partners and community members. OctoML has raised $85 million led by Tiger Global Management, with participation from existing investors Addition, Madrona Venture Group and Amplify Partners. The new funding round brings OctoML’s total funding to date to $132 million.
I’m humbled by the progress our team has made in the two whirlwind years since we founded OctoML. From the company’s inception at the University of Washington, we’ve held a vision that AI/ML technology should be sustainable and accessible to everyone, everywhere. And less than a year ago, we launched our Machine Learning Deployment Platform to get one step closer to making this vision a reality. Today, OctoML provides some of the most prominent enterprises the choice, automation and performance they need to get their trained models into production as quickly and as cost-effectively as possible—and we couldn’t be prouder.
I’m grateful for our Global 1000 early access customers, without whom none of this would be possible. Each week, our customers collectively complete hundreds of automated ML model accelerations, a radical change from the complex, error-prone manual tasks that previously took months in their pre-existing ML deployment pipelines. Customers using our platform for a unified ML deployment lifecycle are seeing significant savings in their inference serving costs and faster time to market for their AI-based services. In several cases, they are launching entirely new services at the edge.
One such customer is a Global 10 company that relies on the OctoML platform as it builds out strategies for its mobility business. They are leveraging OctoML to provide a unified deployment lifecycle across an array of edge hardware solutions, and they use our performance benchmarking insights to determine the right pairing of models and hardware for use cases as diverse as autonomous driving and smart cities. We are excited to have them speak at TVMcon in December alongside a number of other commercial customers. We also want to thank our customers for providing the background insights that helped OctoML be named a Cool Vendor in Enterprise AI Operationalization and Engineering by Gartner, the leading research and advisory company.
Our Growing Ecosystem
We have also made an equally vibrant parallel effort to partner closely with leading hardware vendors, software infrastructure players and top cloud service providers, building out a robust ecosystem that ensures our customers’ deployment needs are met. In fact, over the past two months we announced relationships with AMD, Arm and Qualcomm that span a staggering range of hardware, from cloud GPU and CPU servers all the way down to the smallest microcontrollers. Providing that hardware choice gives our customers confidence that they will always have the best-performing solutions for the intelligent applications and services they are creating.
One example of our performance work is on Arm Cortex-A, where we show an average 2.18x speedup over TensorFlow Lite across a wide range of vision models. That performance lift is transformational for our customers, giving them the opportunity to deliver high-performing, highly accurate AI experiences as close to their users as possible. We’ve also had the opportunity to collaborate with Microsoft Research to push the boundaries of what can be achieved with computer vision models running at scale, again with a 2x+ speedup.
The Road Ahead
Moving forward, we want to share how we plan to leverage this investment. First and foremost, we will nearly triple our world-class engineering team over the next year. In addition to the innovations shared above, a significant part of this team works in and collaborates around open source software. Our founding team created Apache TVM—the open source ML stack for performance and portability—a community-driven technology used to optimize and deploy ML models in applications ranging from widely used smart speakers to large cloud-based AI services. Please take a look at our job postings if you are interested in joining this great team.
We will also be quadrupling the collective size of our sales, customer success, ecosystem partnership and marketing teams to allow us to meet the needs of our customers and rapidly growing partner ecosystem.
If you are interested in achieving the same benefits as our early access customers, please register here for a free trial.