TakaTaka3/Qwen3-4B-Instruct-2507-sft-merged_V2
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32K · Published: Feb 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm

TakaTaka3/Qwen3-4B-Instruct-2507-sft-merged_V2 is a 4-billion-parameter, Qwen3-based, instruction-tuned language model fine-tuned by TakaTaka3 using QLoRA. It is specifically optimized to improve the accuracy of structured output in formats such as JSON, YAML, XML, TOML, and CSV. With a 32K context length, it is designed for tasks that require precise data formatting.
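When consuming structured output from a model like this, it is common to extract and validate the JSON before passing it downstream, since responses may wrap the payload in a code fence or surrounding prose. The sketch below illustrates one such post-processing step; the model call itself is elided, and the `raw` string is a hypothetical response used purely for illustration.

```python
import json


def extract_json(text: str) -> dict:
    """Extract and parse the first top-level JSON object in a response.

    Scans for the first '{' and matches braces to find the object's end,
    then validates it with json.loads. Note: this simple brace counter
    does not account for braces inside JSON string values; it is a
    sketch, not a production parser.
    """
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found in response")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start : i + 1])
    raise ValueError("unbalanced braces in response")


# Hypothetical raw model response (stand-in for actual generation output).
raw = 'Here is the record:\n```json\n{"name": "Ada", "age": 36}\n```'
record = extract_json(raw)
```

A helper like this lets calling code fail fast on malformed output instead of propagating broken data.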
