distil-labs/distil-home-assistant-qwen3

Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Feb 14, 2026 · License: apache-2.0 · Architecture: Transformer

Distil-Home-Assistant-Qwen3 is a 0.6-billion-parameter Qwen3-based model developed by Distil Labs, fine-tuned for multi-turn intent classification and slot extraction in smart home environments. The model excels at on-device tool calling, achieving 96.7% tool-call accuracy and surpassing its 120B-parameter teacher model. With a context length of 40,960 tokens, it is optimized for private, low-latency smart home control and edge deployment.


Overview

Distil-Home-Assistant-Qwen3 is a specialized 0.6 billion parameter model built on the Qwen3 architecture by Distil Labs. It is specifically fine-tuned for multi-turn intent classification and slot extraction within smart home control systems. A key innovation is its training via knowledge distillation from a 120B parameter teacher model, enabling it to achieve superior performance in a significantly smaller footprint.

Key Capabilities & Performance

  • Exceptional Tool Call Accuracy: Achieves 96.7% tool call accuracy, notably exceeding its 120B teacher model (94.1%) while being 200 times smaller.
  • On-Device Operation: Designed for local execution, ensuring privacy and low-latency responses for smart home commands.
  • Multi-turn Conversation Handling: Capable of maintaining context across conversation turns to resolve pronouns and sequential commands.
  • Structured Tool Calling: Outputs structured JSON tool calls for 6 specific smart home functions: toggle_lights, set_thermostat, lock_door, get_device_status, set_scene, and intent_unclear.
  • Efficient Size: At 0.6B parameters, it offers a highly efficient solution for edge deployment.
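The structured tool-calling behavior above can be sketched with a small parser on the host side. The six function names come from this card; the JSON field names (`name`, `arguments`) are an assumed schema for illustration, not a documented output format.

```python
import json

# The six functions the model can emit, per this card. The "name"/"arguments"
# field layout is an illustrative assumption about the JSON schema.
KNOWN_TOOLS = {
    "toggle_lights", "set_thermostat", "lock_door",
    "get_device_status", "set_scene", "intent_unclear",
}

def parse_tool_call(raw: str) -> dict:
    """Parse a model-emitted tool call and check it against the taxonomy."""
    call = json.loads(raw)
    if call.get("name") not in KNOWN_TOOLS:
        # Anything outside the bounded taxonomy is routed to intent_unclear.
        return {"name": "intent_unclear", "arguments": {}}
    return call

example = '{"name": "toggle_lights", "arguments": {"room": "kitchen", "state": "on"}}'
print(parse_tool_call(example)["name"])  # toggle_lights
```

Because the model exposes a closed set of intents, this kind of strict validation is cheap and catches the rare malformed or out-of-taxonomy call before it reaches a device.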

Training Methodology

The model was trained using the Distil Labs platform, starting with 50 hand-written multi-turn smart home conversations. This seed data was then synthetically expanded to thousands of examples using a 120B teacher model, followed by multi-turn tool calling distillation on the Qwen3-0.6B base model.

Limitations

  • Trained exclusively on English smart home intents.
  • Covers only 6 specific smart home functions; not a general-purpose tool caller.
  • A small fraction of function calls (3.3%) may be incorrect.
  • Fixed temperature range for thermostat control (60-80°F).
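Given the fixed 60-80°F thermostat range noted above, a host application may want to clamp requested setpoints before dispatching a call. A minimal sketch (the helper name is an assumption; only the range comes from this card):

```python
# Supported thermostat range per this card; values outside it are clamped.
THERMOSTAT_MIN_F = 60
THERMOSTAT_MAX_F = 80

def clamp_setpoint(temp_f: float) -> float:
    """Clamp a requested temperature into the supported 60-80°F range."""
    return max(THERMOSTAT_MIN_F, min(THERMOSTAT_MAX_F, temp_f))

print(clamp_setpoint(72))  # 72 - already in range
print(clamp_setpoint(95))  # 80 - clamped to the upper bound
```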

Ideal Use Cases

  • On-device smart home controllers prioritizing privacy and local processing.
  • Text-based smart home chatbots requiring structured intent routing.
  • Edge deployment for local smart home hubs.
  • Any multi-turn tool calling task with a bounded intent taxonomy.
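For the multi-turn use cases above, the model resolves pronouns and sequential commands from conversation context, so the host application needs to feed history back in each turn. A minimal rolling history buffer, assuming a role/content message format modeled on common chat APIs:

```python
class Conversation:
    """Rolling message history passed back to the model on each turn."""

    def __init__(self, max_turns: int = 8):
        self.max_turns = max_turns
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Keep only recent turns so the prompt stays within the context window.
        self.messages = self.messages[-2 * self.max_turns:]

convo = Conversation()
convo.add("user", "Turn on the kitchen lights")
convo.add("assistant", '{"name": "toggle_lights", "arguments": {"room": "kitchen", "state": "on"}}')
convo.add("user", "Now turn them off")  # "them" is resolved from prior turns
print(len(convo.messages))  # 3
```

The truncation policy (last 8 turns here) is an illustrative choice; in practice it should be sized against the model's context length.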