JuntaTakahashi/qwen3-4b-structured-sft-lora
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Feb 10, 2026 · Architecture: Transformer

JuntaTakahashi/qwen3-4b-structured-sft-lora is a 4-billion-parameter Qwen3-based language model, fine-tuned via Supervised Fine-Tuning (SFT) with TRL. It was adapted from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit and is intended for general text-generation tasks, with a 40,960-token context length for processing extensive inputs.
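Since the model was fine-tuned with TRL on top of an instruction-tuned Qwen3 base, it should be loadable through the standard Hugging Face `transformers` API. The sketch below is a hypothetical usage example, not from the model card itself: it assumes the repository ships merged (or auto-resolvable) weights and a chat template, and that `transformers` and a suitable backend (e.g. a GPU with BF16 support) are available.

```python
# Hypothetical usage sketch for JuntaTakahashi/qwen3-4b-structured-sft-lora.
# Assumes the repo resolves via AutoModelForCausalLM and includes a chat template.

MODEL_ID = "JuntaTakahashi/qwen3-4b-structured-sft-lora"

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the BF16 quantization listed above
        device_map="auto",
    )

    # Build a chat-formatted prompt and generate a response.
    messages = [{"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If the repository instead publishes only a LoRA adapter, the adapter would need to be loaded onto the base model with `peft.PeftModel.from_pretrained` rather than directly via `AutoModelForCausalLM`.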
