Using Featherless.ai Models with OpenClaw
A step-by-step guide to using Featherless.ai models in OpenClaw via OpenAI-compatible configuration

What is OpenClaw?
OpenClaw is an open-source, self-hosted AI agent gateway that's been gaining traction in developer communities. It connects chat apps like WhatsApp, Telegram, Discord, and iMessage to AI coding agents, letting you message your AI assistant from anywhere.
Unlike cloud-based solutions, OpenClaw runs on your own hardware. You install it once, and it becomes a bridge between your messaging apps and AI agents that can actually execute code, manage files, and interact with your system.
The power comes with risk. OpenClaw agents can run bash commands, modify your filesystem, and perform actions on your behalf.
⚠️ CRITICAL: Only run OpenClaw on a VM, VPS, or Docker container. NEVER on your main development machine or production environment.
Use a disposable Ubuntu VPS, dedicated server, or local VM. If something goes wrong, you want to be able to nuke it without losing your actual work.
What You Need
A VM or VPS (Ubuntu 24 recommended)
Featherless.ai API key
5 minutes
This guide uses Kimi (moonshotai/Kimi-K2.5) as an example model to keep the setup concrete and easy to follow.
You do not need to use Kimi. Choose models based on your Featherless subscription and the models enabled for your account. Any Featherless-supported model with an OpenAI-compatible API can be configured in the same way.
The Problem
OpenClaw's onboarding wizard has no built-in option for Featherless.ai, so it cannot generate an OpenAI-compatible configuration for Featherless during setup. You need to edit the config manually to use Featherless models.
Install OpenClaw
Linux/macOS:
curl -fsSL https://openclaw.ai/install.sh | bash
Windows (PowerShell):
iwr -useb https://openclaw.ai/install.ps1 | iex
Verify the install:
openclaw --version
Quick Setup
1. Run the onboarding wizard
openclaw onboard
When the wizard asks for an API key, paste your Featherless key, e.g.:
sk-1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdef
Finish the setup. This creates ~/.openclaw/openclaw.json.
2. Understand OpenClaw's model naming
OpenClaw uses this format: provider/namespace/model-name
Examples:
featherless/moonshotai/Kimi-K2.5 ✓
featherless/zai-org/GLM-4.7 ✓
kimi-k2.5 ✗ (missing provider/namespace)
moonshotai/Kimi-K2.5 ✗ (missing provider)
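The shape is easy to check mechanically. This is an illustrative helper, not part of OpenClaw: it only verifies that a model ref has the provider/namespace/model-name form before you write it into the config.

```shell
# Illustrative check (not an OpenClaw command): accept only refs with
# the provider/namespace/model-name shape, i.e. at least two slashes.
valid_model_ref() {
  case "$1" in
    */*/*) return 0 ;;   # provider/namespace/model-name: accept
    *)     return 1 ;;   # missing provider and/or namespace: reject
  esac
}

valid_model_ref "featherless/moonshotai/Kimi-K2.5" && echo "ok"
valid_model_ref "kimi-k2.5" || echo "rejected"
```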
3. Edit the config file
{
"messages": {
"ackReactionScope": "group-mentions"
},
"agents": {
"defaults": {
"maxConcurrent": 1,
"subagents": {
"maxConcurrent": 1
},
"compaction": {
"mode": "safeguard"
},
"workspace": "/root/.openclaw/workspace",
"model": {
"primary": "featherless/moonshotai/Kimi-K2.5"
}
}
},
"models": {
"providers": {
"featherless": {
"baseUrl": "https://api.featherless.ai/v1",
"apiKey": "YOUR_FEATHERLESS_API_KEY",
"api": "openai-completions",
"models": [
{
"id": "moonshotai/Kimi-K2.5",
"name": "Kimi K2.5",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 256000,
"maxTokens": 4096
}
]
}
}
},
"gateway": {
"mode": "local",
"auth": {
"mode": "token"
},
"port": 18789,
"bind": "loopback"
}
}
Replace YOUR_FEATHERLESS_API_KEY with your actual key.
Important fields:
model.primary: full path with provider prefix
models.providers.featherless.api: must be "openai-completions"
models[0].id: the exact model name Featherless expects
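A stray comma in hand-edited JSON is the most common way to break this step. A minimal sketch, assuming the default config path from the wizard, that validates the file as JSON before you restart (python3 ships with Ubuntu 24):

```shell
# Sketch: validate a hand-edited config file as JSON before restarting,
# so a syntax error doesn't take the gateway down.
validate_config() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# Usage against the real file (default path from the wizard):
#   validate_config ~/.openclaw/openclaw.json && openclaw gateway restart
```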
4. Restart OpenClaw
openclaw gateway restart
If that doesn't work:
pkill -9 -f openclaw
openclaw gateway start
5. Test it
openclaw agent --message "Hello" --agent main --local
You should get a response from Kimi.
Common Issues
Error: "Unknown model"
Error: Unknown model: moonshotai/Kimi-K2.5
Fix: Add the provider prefix to model.primary:
"primary": "featherless/moonshotai/Kimi-K2.5"Error: "404 The model does not exist"
404 The model kimi-k2.5 does not exist
Fix: Use the exact model name from Featherless:
"id": "moonshotai/Kimi-K2.5"Error: "Concurrency limit exceeded"
Fix: Reduce concurrent requests:
"maxConcurrent": 1,
"subagents": {
"maxConcurrent": 1
}
Or use a lighter model (GLM-4.7 or Llama 3.1 8B instead of DeepSeek-R1).
Error: "Session file locked"
Fix:
pkill -9 -f openclaw
rm -f ~/.openclaw/agents/main/sessions/*.lock
openclaw gateway start
Other Featherless Models
{
"model": {
"primary": "featherless/zai-org/GLM-4.7"
},
"models": {
"providers": {
"featherless": {
"models": [
{
"id": "zai-org/GLM-4.7",
"name": "GLM 4.7",
"contextWindow": 128000,
"maxTokens": 4096
}
]
}
}
}
}
Testing
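The two errors above ("Unknown model" and "404") both come down to model.primary and models[].id disagreeing. A hedged sketch, using only the config layout shown above (agents.defaults.model.primary and models.providers.&lt;provider&gt;.models[].id), that checks they agree before you start testing:

```shell
# Sketch: confirm that model.primary ("provider/<model id>") matches an id
# declared under models.providers.<provider>.models in the config file.
check_primary() {
  python3 - "$1" <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
primary = cfg["agents"]["defaults"]["model"]["primary"]
provider, _, model_id = primary.partition("/")
ids = [m["id"] for m in cfg["models"]["providers"][provider]["models"]]
print("OK" if model_id in ids else "MISMATCH: " + model_id)
EOF
}

# Usage: check_primary ~/.openclaw/openclaw.json
```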
Test API directly
curl -X POST https://api.featherless.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "moonshotai/Kimi-K2.5",
"messages": [{"role": "user", "content": "Hello"}],
"max_tokens": 100
}'
Watch logs
openclaw logs --followLook for:
embedded run start: provider=featherless model=moonshotai/Kimi-K2.5 ✓
embedded run done: aborted=false ✓
Unknown model errors ✗
404 errors ✗
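If the logs show 404s, it helps to confirm the exact model id Featherless expects. A sketch that extracts ids from an OpenAI-style /v1/models response; the endpoint path follows the OpenAI-compatible convention, so verify it against the Featherless docs:

```shell
# Sketch: pull model ids out of an OpenAI-style /v1/models response,
# to confirm the exact id to put in models[0].id.
list_model_ids() {
  python3 -c 'import json, sys
for m in json.load(sys.stdin)["data"]:
    print(m["id"])'
}

# Live usage (requires a valid key):
#   curl -s https://api.featherless.ai/v1/models \
#     -H "Authorization: Bearer YOUR_API_KEY" | list_model_ids

# Offline demonstration on a sample payload:
echo '{"data":[{"id":"moonshotai/Kimi-K2.5"},{"id":"zai-org/GLM-4.7"}]}' | list_model_ids
```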
Remote Access
SSH port forward to access the dashboard:
ssh -L 18789:localhost:18789 root@YOUR_SERVER_IP
Open: http://localhost:18789/?token=YOUR_GATEWAY_TOKEN
Get your token:
cat ~/.openclaw/openclaw.json | grep -A 3 '"auth"'
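If grep's context window cuts the block off, a hedged alternative is to print the whole auth object with python3. The key names inside "auth" may vary by OpenClaw version, so this just dumps whatever is there:

```shell
# Sketch: print the gateway auth block from the config, instead of
# relying on grep -A 3 catching the right number of lines.
show_auth() {
  python3 -c 'import json, sys
cfg = json.load(open(sys.argv[1]))
print(json.dumps(cfg.get("gateway", {}).get("auth", {}), indent=2))' "$1"
}

# Usage: show_auth ~/.openclaw/openclaw.json
```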
Tips
Start with GLM-4.7 or Llama 3.1 8B (lighter models)
Keep maxConcurrent: 1 if you have limited quota
Back up config before changes:
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.backup
Watch logs when testing:
openclaw logs --follow
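If you edit the config often, a single .backup file gets overwritten on every copy. A small sketch that keeps timestamped backups instead:

```shell
# Sketch: timestamped backups, so repeated edits don't clobber
# a single .backup file.
backup_config() {
  cp "$1" "$1.$(date +%Y%m%d-%H%M%S).backup"
}

# Usage: backup_config ~/.openclaw/openclaw.json
```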