hnda/qwen3-4b-alf-sft-merged
- Task: Text generation
- Concurrency cost: 1
- Model size: 4B
- Quantization: BF16
- Context length: 32k
- Published: Feb 15, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

hnda/qwen3-4b-alf-sft-merged is a 4-billion-parameter Qwen3-based language model developed by hnda, fine-tuned from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit. It was trained significantly faster than a standard fine-tune by using Unsloth together with Hugging Face's TRL library, and the description reports a 40,960-token context length (the listing's badge shows 32k). The accelerated training methodology makes it suitable for applications that need a capable yet resource-efficient Qwen3 variant.
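As a usage sketch, the merged model should load like any other causal LM on the Hugging Face Hub via the transformers library. The snippet below is illustrative, not an official example from the model author: the generation settings are assumptions, and the context-length constant follows the listing's 32k badge (the conservative figure, since the description mentions 40,960). The actual download-and-generate step is gated behind an environment variable so the helper logic can be inspected without fetching 4B weights.

```python
import os

# Model identifier from the listing; context length per the 32k badge
# (the description mentions 40,960, so 32,768 is the conservative choice).
MODEL_ID = "hnda/qwen3-4b-alf-sft-merged"
CONTEXT_LENGTH = 32_768


def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    ctx: int = CONTEXT_LENGTH) -> bool:
    """Check that the prompt plus the generation budget fits the window."""
    return prompt_tokens + max_new_tokens <= ctx


# Set RUN_DEMO=1 to actually download the weights and run generation;
# requires transformers, torch, and enough memory for a 4B BF16 model.
if os.environ.get("RUN_DEMO") == "1":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    messages = [{"role": "user",
                 "content": "Summarize the Qwen3 architecture in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0][inputs.shape[-1]:],
                           skip_special_tokens=True))
```

The `fits_in_context` helper is a hypothetical convenience for callers batching long prompts; it simply enforces that prompt tokens plus requested new tokens stay within the advertised window.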
