LiteLLM
LiteLLM gives you a unified interface for accessing multiple LLMs, along with usage tracking, guardrails, custom logging, and more.
Installation
First, make sure you have the litellm library installed. If not, you can install it by running:
pip install litellm
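Rather than passing the key in code, you can keep it in an environment variable. For the featherless_ai provider, LiteLLM conventionally reads FEATHERLESS_AI_API_KEY (variable name assumed from LiteLLM's provider conventions; verify against the LiteLLM Featherless docs linked below):

```shell
# Keep your key out of source code by exporting it once per session.
# FEATHERLESS_AI_API_KEY is the variable name LiteLLM is assumed to read
# for the featherless_ai provider.
export FEATHERLESS_AI_API_KEY="YOUR FEATHERLESS API KEY"
```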
Basic Completion Example
This example shows you how to make a simple completion request to a Featherless AI model.
from litellm import completion

response = completion(
    model="featherless_ai/featherless-ai/Qwerky-72B",  # Example model
    api_key="YOUR FEATHERLESS API KEY",
    messages=[{"role": "user", "content": "Write a short poem about AI."}]
)

print(response.choices[0].message.content)
Streaming Example
LiteLLM also supports streaming responses.
from litellm import completion

response_stream = completion(
    model="featherless_ai/featherless-ai/Qwerky-72B",  # Example model
    api_key="YOUR FEATHERLESS API KEY",
    messages=[{"role": "user", "content": "Tell me a fun fact about space"}],
    stream=True
)

print("Streaming response:")
for chunk in response_stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")  # Print the chunk's content
print("\n\nStreaming complete.")
Resources
LiteLLM Featherless docs
The LiteLLM Featherless documentation
LiteLLM Featherless Cookbook
A notebook guiding you through setting up LiteLLM with Featherless
Model Catalog
Our catalog of models
Last edited: Jun 16, 2025