taharmasmaliyev07/Qwen-3-8B-b16-tuned-full
Text generation | Concurrency cost: 1 | Model size: 8B | Quant: FP8 | Context length: 32k | Published: Mar 8, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights
taharmasmaliyev07/Qwen-3-8B-b16-tuned-full is an 8-billion-parameter Qwen3 model published by taharmasmaliyev07. It was fine-tuned with Unsloth and Hugging Face's TRL library, which the author reports made training 2x faster. The model is intended for general language tasks, building on the Qwen3 architecture and this efficient fine-tuning setup.
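For illustration, here is a minimal sketch of how a request to this model might be assembled, assuming the hosting provider exposes an OpenAI-compatible chat-completions endpoint. The base URL and API key below are hypothetical placeholders, not values from this listing:

```python
import json

# Hypothetical values: substitute your provider's real endpoint and key.
BASE_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload for this model."""
    return {
        "model": "taharmasmaliyev07/Qwen-3-8B-b16-tuned-full",
        "messages": [{"role": "user", "content": prompt}],
        # Keep prompt plus completion within the model's 32k context window.
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize the Qwen3 architecture in one sentence.")
print(json.dumps(payload, indent=2))
```

Posting this payload with an `Authorization: Bearer <key>` header (e.g. via `requests.post`) would return a standard chat-completions response, assuming the provider follows that API shape.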