kikansha-Tomasu/Qwen3-4B-Instruct-2507-sft
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

kikansha-Tomasu/Qwen3-4B-Instruct-2507-sft is a 4-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit). The fine-tune targets structured-output accuracy across formats such as JSON, YAML, XML, TOML, and CSV. It supports a context length of 32,768 tokens and is intended for tasks that require precise data formatting.
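Since the model is tuned for structured output, a typical integration is to prompt it for a format like JSON and then parse and validate the response. The sketch below shows a minimal, hedged example of that post-processing step: the `sample` response string is a hypothetical stand-in for what generation (e.g. via the `transformers` library) would return, and `extract_json` is an illustrative helper, not part of the model's API.

```python
import json

def extract_json(response: str) -> dict:
    """Extract and parse the first JSON object from a model response.

    Tolerates leading/trailing prose by slicing from the first '{'
    to the last '}' before handing the span to json.loads.
    """
    start = response.find("{")
    end = response.rfind("}")
    if start == -1 or end == -1 or end <= start:
        raise ValueError("no JSON object found in response")
    return json.loads(response[start : end + 1])

# Hypothetical model response; in practice this would come from a
# generation call against kikansha-Tomasu/Qwen3-4B-Instruct-2507-sft.
sample = 'Here is the record:\n{"name": "Alice", "age": 30, "tags": ["admin"]}'

record = extract_json(sample)
print(record["name"])  # Alice
```

Parsing with `json.loads` (rather than trusting the raw text) gives an immediate signal when the model emits malformed output, which is the failure mode this fine-tune is meant to reduce.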
