sinamny/sft_merged_model

Text generation · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 23, 2026 · License: Apache-2.0 · Architecture: Transformer (open weights) · Concurrency cost: 1

sinamny/sft_merged_model is a 4-billion-parameter Qwen3-based causal language model published by sinamny, fine-tuned from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit. Training used Unsloth for accelerated fine-tuning, and the merged model supports a 32,768-token (32k) context length. It targets applications that need a balance of capability and inference speed.
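The card gives no loading recipe, so the following is a minimal sketch assuming the merged BF16 weights load through the standard Hugging Face `transformers` API (`AutoModelForCausalLM` / `AutoTokenizer`); the heavy dependencies are imported lazily inside the function so the sketch can be read without them installed.

```python
# Hypothetical usage sketch for sinamny/sft_merged_model.
# Assumption: standard Hugging Face transformers loading works for this
# merged checkpoint; the model card itself does not document an API.

MODEL_ID = "sinamny/sft_merged_model"
MAX_CONTEXT = 32_768  # 32k context length stated on the card


def load_model(device_map: str = "auto"):
    """Return (tokenizer, model) with BF16 weights.

    torch and transformers are imported lazily so this module can be
    inspected or tested without the heavy dependencies present.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # BF16 weights per the card
        device_map=device_map,       # e.g. "auto" to spread across GPUs
    )
    return tokenizer, model
```

Prompts plus expected generation should stay under `MAX_CONTEXT` tokens; inputs beyond the 32k window will be truncated or rejected depending on the serving stack.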
