taharmasmaliyev07/Qwen-3-8B-b16-tuned-full-v2
Task: Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Mar 8, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

taharmasmaliyev07/Qwen-3-8B-b16-tuned-full-v2 is an 8-billion-parameter causal language model based on Qwen3, fine-tuned by taharmasmaliyev07. It was trained with Unsloth and Hugging Face's TRL library, which the author reports enables roughly 2x faster fine-tuning. The model is intended for general-purpose text generation tasks.
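
Below is a minimal sketch of loading the checkpoint for text generation with Hugging Face Transformers. The repository id is taken from this page; the dtype, device placement, prompt, and generation parameters are illustrative assumptions rather than settings documented by the author.

```python
# Minimal usage sketch: load the model and run a single chat-style generation.
# Assumptions: bf16 weights (guessed from the "b16" naming), GPU/CPU placement
# via device_map="auto", and an arbitrary example prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taharmasmaliyev07/Qwen-3-8B-b16-tuned-full-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption, not confirmed by the model card
    device_map="auto",
)

# Qwen3-style chat formatting via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Summarize what a causal language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```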
