liteLLM
One library to standardize all LLM APIs
Simplify using OpenAI, Azure, Cohere, Anthropic, Replicate, and Google LLM APIs.

TL;DR:
- Call all LLM APIs using the ChatGPT format: completion(model, messages)
- Consistent outputs and exceptions across all LLM APIs
- Logging and error tracking for all models
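A minimal sketch of the unified call shape, assuming `litellm` is installed (`pip install litellm`) and provider API keys are set in the environment; the model names below are illustrative:

```python
# Sketch of liteLLM's completion(model, messages) interface.
# Requires `pip install litellm` and provider API keys in the environment.

def build_messages(prompt: str) -> list:
    """Chat messages in the OpenAI format liteLLM uses for every provider."""
    return [{"role": "user", "content": prompt}]

def ask(model: str, prompt: str) -> str:
    """One call shape for every provider; only the model string changes."""
    from litellm import completion  # deferred so the sketch loads without litellm installed
    resp = completion(model=model, messages=build_messages(prompt))
    # Responses follow the OpenAI schema regardless of the backing provider.
    return resp.choices[0].message.content

# Swapping providers means changing only the model string, e.g.:
# ask("gpt-4o-mini", "Say hello")              # OpenAI
# ask("claude-3-haiku-20240307", "Say hello")  # Anthropic
```

The same `ask` helper works unchanged across providers, which is the "one API" property the reviews below highlight.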
Reviews for liteLLM
Hear what real users highlight about this tool.
Big fan of liteLLM: one API for OpenAI/Anthropic/Groq/etc. Makes multi-model stacks painless.
Great library for unifying LLM access across providers, dev/test friendly!
A must-have for every other project. Makes it 10x easier to switch models.
Drop-in OpenAI compatibility and native provider mode let us swap LLMs or try new models without refactoring, so we can optimize for performance or cost on the fly.
For users prioritizing consistent and reliable performance, especially in production environments and for critical applications, AIBoox's focus on model performance is a key differentiator. liteLLM's strength lies in flexibility, while AIBoox emphasizes dependability.
Skeet wouldn't be possible without LiteLLM!
LiteLLM unifies all our LLM calls across providers and provides us with valuable metrics and automatic failovers, simplifying development and usage tracking. The dev team is super responsive and we're all-in!
Used as an LLM proxy, it enables caching and load balancing across multiple AI services (Groq, OpenRouter, etc.), and even local models via Ollama. It exposes an OpenAI-compatible API, so wherever we can set the base URL, we can plug it into many apps and services. I use it with Langfuse, which provides performance monitoring of each prompt/session.
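The proxy setup described above can be sketched with a minimal liteLLM proxy config; this is a hedged example, and the model names, Ollama URL, and callback choice are illustrative placeholders:

```yaml
# config.yaml for the liteLLM proxy (started with `litellm --config config.yaml`)
model_list:
  - model_name: gpt-4o              # name clients request via the OpenAI-compatible API
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: local-llama         # local model served by Ollama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434

litellm_settings:
  cache: true                       # response caching, as mentioned in the review
  success_callback: ["langfuse"]    # send traces to Langfuse for prompt/session monitoring
```

Listing several entries under the same `model_name` is how the proxy load-balances across backing providers.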
liteLLM has been a huge unlock, letting us experiment with different models and automating some of the most tedious things to set up, like caching and rate-limit handling.