taharmasmaliyev07/Qwen-3-4B-b16-tuned-full
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 30, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

taharmasmaliyev07/Qwen-3-4B-b16-tuned-full is a 4-billion-parameter Qwen3 model published by taharmasmaliyev07. It was fine-tuned from unsloth/Qwen3-4B, with training speed optimized using Unsloth and Hugging Face's TRL library. The model supports a 32,768-token context length, making it suitable for applications that need to process longer sequences efficiently.
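As a minimal usage sketch, the model could be loaded with the standard `transformers` API, assuming it is published on the Hugging Face Hub under the ID above and ships BF16 weights (the `generate` helper and its parameters below are illustrative, not part of the model card):

```python
# Assumptions: the model is hosted on the Hugging Face Hub under MODEL_ID
# and loads via the standard transformers causal-LM API in BF16.
MODEL_ID = "taharmasmaliyev07/Qwen-3-4B-b16-tuned-full"
CONTEXT_LENGTH = 32768  # tokens, per the model card

def load_model():
    """Lazily import heavy dependencies and load tokenizer + BF16 model.
    Downloading a 4B-parameter checkpoint requires several GB of disk/RAM."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the published BF16 quantization
        device_map="auto",
    )
    return tokenizer, model

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt`, returning only the new text."""
    tokenizer, model = load_model()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The imports are kept inside `load_model` so the module can be inspected without pulling in `torch`/`transformers`; prompts plus generated tokens should stay within the 32,768-token context window.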
