FAQ
Answers to commonly asked questions about Featherless
What is Featherless?
Featherless is an LLM hosting provider that gives subscribers access to a continually expanding library of Hugging Face models.
Featherless: Less hassle, less effort. Start now.
Why should I use Featherless?
Featherless is a serverless provider with unique model loading and GPU orchestration capabilities that allow us to keep an exceptionally large catalog of models online.
Other providers either offer low-cost access (e.g. OpenRouter, AWS Bedrock) but with a limited set of models, or an unlimited range of models (e.g. RunPod) but require users to manage servers and bear the associated operating costs (e.g. more than $2/hour for GPUs capable of running a 70B model).
Featherless provides the best of both worlds: unmatched model range and variety with serverless pricing.
| Provider | Low cost | Speed | Model choice |
|---|---|---|---|
| RunPod | ❌ | ✅ | ✅ (thousands) |
| Hugging Face Inference | ❌ | ✅ | ✅ (thousands) |
| Anthropic | ✅ | ✅ | ❌ (<10 models) |
| OpenRouter | ✅ | ✅ | ❌ (~200 models) |
| Featherless | ✅ | ✅ | ✅ (thousands) |
How do I get started with Featherless?
To get started, create an account and subscribe to one of our plans. Plans are subscription and concurrency based, allowing unlimited monthly requests with a fixed number of concurrent requests. A paid subscription gives access to all models up to a given size. Visit our Plans page for more information.
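Once subscribed, you can send your first request from any OpenAI-compatible client. The sketch below is a minimal illustration only: it assumes an OpenAI-compatible chat completions endpoint at https://api.featherless.ai/v1 and uses a placeholder model id, neither of which is stated in this FAQ.

```python
# Minimal sketch of a first request.
# Assumptions (not confirmed by this FAQ): the base URL below is Featherless's
# OpenAI-compatible endpoint, and the model id is available in the catalog.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",         # key from your account dashboard
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello from Featherless."}],
)
print(response.choices[0].message.content)
```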
Do you log my chat history?
No, we do not log any of the prompts or completions sent to our API. You can find more information in our privacy policy.
Which model architectures are supported?
Our goal is to provide serverless inference for all open-weight models on Hugging Face. We currently support a wide range of Llama-style architectures, including Llama 2 and Llama 3, Mistral, Qwen, and DeepSeek. For more details, see https://featherless.ai/docs/model-compatibility.
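If you want to check programmatically which models are currently available, one approach is the standard OpenAI-compatible model listing call. This is a sketch under the same assumption as above (that the API follows the /v1/models convention); the endpoint is not documented in this FAQ.

```python
# Sketch: list currently available models, assuming an OpenAI-compatible
# /v1/models endpoint (an assumption, not confirmed by this FAQ).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

for model in client.models.list():
    print(model.id)  # ids typically follow Hugging Face "org/model" naming
```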
How do I get models added?
Business customers can deploy models through their dashboard. Users on individual plans can request new models on our Discord!
How can I contact support?
The best way to reach us is to join our Discord!