Aider

Powering your in-terminal coding assistance with inference from Featherless.


To make it work, you need to add a `.aider.model.settings.yml` file and pass it on the command line with `--model-settings-file`[1].

Here is an example using Qwen2.5-72B-Instruct:

`.aider.model.settings.yml`:

```yaml
- accepts_images: false
  cache_control: false
  caches_by_default: false
  edit_format: whole
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 4096
  lazy: false
  name: openai/Qwen/Qwen2.5-72B-Instruct
  reminder: user
  send_undo_reply: false
  streaming: true
  use_repo_map: true
  use_system_prompt: true
  use_temperature: true
```
You also need a matching `.aider.model.metadata.json`, passed with `--model-metadata-file`:

```json
{
    "openai/Qwen/Qwen2.5-72B-Instruct": {
        "max_tokens": 4096,
        "max_input_tokens": 4096,
        "max_output_tokens": 4096,
        "input_cost_per_token": 0,
        "output_cost_per_token": 0,
        "litellm_provider": "openai",
        "mode": "chat",
        "support_vision": false,
        "support_function_calling": false
    }
}
```

You need an entry in each file for every model you want to use; you then select the model with `--model 'openai/Qwen/Qwen2.5-72B-Instruct'`.

For example[2]:

```shell
aider --openai-api-base 'https://api.featherless.ai/v1' \
      --openai-api-key your_featherless_API_key \
      --model 'openai/Qwen/Qwen2.5-72B-Instruct' \
      --map-tokens 1024 \
      --model-metadata-file '/path/to/.aider.model.metadata.json' \
      --model-settings-file '/path/to/.aider.model.settings.yml'
```

As an addendum, you can use a `.env` file and a `.aider.conf.yml` to make recurring invocations easier.
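As a sketch of that setup (key names follow aider's config conventions, where each YAML key mirrors a command-line flag without the leading dashes; the paths are placeholders from the example above):

`.env`:

```shell
OPENAI_API_BASE=https://api.featherless.ai/v1
OPENAI_API_KEY=your_featherless_API_key
```

`.aider.conf.yml`:

```yaml
# Each key mirrors a command-line flag without the leading dashes.
model: openai/Qwen/Qwen2.5-72B-Instruct
map-tokens: 1024
model-settings-file: /path/to/.aider.model.settings.yml
model-metadata-file: /path/to/.aider.model.metadata.json
```

With both files in the project directory, running plain `aider` should be equivalent to the full command above.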

[1] This is not the only way to use it; more on it here.

[2] More info on the options here.