distil-labs/distil-qwen3-0.6b-text2sql is a 0.6-billion-parameter Qwen3-based model developed by Distil Labs, fine-tuned to convert natural-language questions into SQL queries. Distilled from DeepSeek-V3, this compact model achieves strong Text2SQL performance, reaching 74% LLM-as-a-Judge accuracy, a 2x improvement over its base model. With a 40,960-token context length, it is optimized for lightweight, fast local deployment on SQL generation tasks.
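A minimal sketch of how such a model might be prompted for local SQL generation. The model ID comes from this card, but the prompt template and the use of the Hugging Face `transformers` text-generation pipeline are assumptions, not documented behavior of this model:

```python
def build_text2sql_prompt(schema: str, question: str) -> str:
    """Combine a table schema and a natural-language question into a single
    Text2SQL prompt. This template is a hypothetical example, not the
    model's documented input format."""
    return (
        "Given the following SQL schema, write a query that answers the question.\n\n"
        f"Schema:\n{schema}\n\n"
        f"Question: {question}\nSQL:"
    )


if __name__ == "__main__":
    schema = "CREATE TABLE orders (id INT, customer TEXT, total REAL, created DATE);"
    question = "What is the total revenue per customer?"
    prompt = build_text2sql_prompt(schema, question)
    print(prompt)

    # Actual generation (downloads the ~0.6B model on first run):
    # from transformers import pipeline
    # generator = pipeline(
    #     "text-generation", model="distil-labs/distil-qwen3-0.6b-text2sql"
    # )
    # print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```

The generation call is left commented out so the sketch runs without network access; the small parameter count and 40,960-token context make CPU-only local inference practical.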