We’re excited to announce the new OctoAI Endpoint integration in LangChain, allowing you to easily create LangChain applications using large language models (LLMs) on OctoAI. The OctoAI Endpoint integration is available today in LangChain release 0.0.228 and can be used to build against any model running on OctoAI - including the latest LLMs like Falcon, MPT, and Vicuna, as well as your custom or fine-tuned models.
LangChain: now foundational to LLM-powered applications
LangChain has grown in adoption, community engagement, and maturity since its release in October 2022, and today it is the de facto framework for building new LLM-powered applications. At its core, LangChain connects LLMs to data and logic specific to the application. LLMs are trained on a broad but generic set of historical data; through LangChain, applications can return rich, use-case-specific responses to queries. LangChain also abstracts the specifics of the underlying LLM, allowing developers to choose the optimal model for their use case as they create data query pipelines that effectively address specific needs.
Growing adoption of the LangChain + OSS LLM combination
Early momentum in LLM usage centered on GPT, with LangChain used to add context to responses from different types of data sources and vector embeddings. Since then, however, there has been an explosion of innovation in open source software (OSS) LLMs, and powerful OSS models like MPT, Falcon, and Vicuna are proving highly effective, and even better for certain use cases. As OSS LLMs have grown, developers have increasingly built on LangChain to combine new open source models with targeted data connection enhancements, as an effective alternative to the GPT-centric approach to building such applications.
New LangChain integration for models on OctoAI
OctoAI’s model acceleration brings you a library of the fastest and most affordable foundation AI models - including the latest OSS LLMs like LLaMA-65B, MPT, Falcon, and Vicuna. We have now made it even easier to connect your LangChain applications to your choice of OSS LLMs (or your own fine-tuned or custom models) on OctoAI by making the OctoAI Endpoint LLM integration for LangChain available natively in the upstream LangChain code.
With the OctoAI Endpoint integration, you can easily provision your desired LLM on OctoAI, connect it to your LangChain data sources, and build your natural language application. Detailed documentation and examples of this integration are available in the LangChain documentation. You can also use the code samples in the OctoAI documentation, which include examples of building applications on new OSS LLMs such as MPT and Falcon, to get started quickly on OctoAI.
We believe this ongoing expansion of options and capabilities in OSS LLMs will accelerate interest and activity in building on them. LangChain’s ability to abstract the LLM and to integrate data and logic to power responses is a catalyst for this momentum. We’re already seeing examples that validate this, with developers building on LangChain and OSS models like Falcon to create highly customized experiences for their applications. OctoAI continues to add the newest OSS LLMs to its library of fast and efficient foundation models as they are released, and the OctoAI Endpoint integration in LangChain makes it easy for developers to adopt and apply these models in their use cases.