Azimjon2313/my-qwen3-14b-finetuned
Text generation · Concurrency cost: 1 · Model size: 14B · Quantization: FP8 · Context length: 32k · Published: Feb 12, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Azimjon2313/my-qwen3-14b-finetuned is a 14-billion-parameter Qwen3-based causal language model developed by Azimjon2313. It was fine-tuned with Unsloth and Hugging Face's TRL library, which the Unsloth project reports can make training roughly 2x faster than standard fine-tuning methods. The model targets general language understanding and generation tasks, offering a capable open-weights foundation for downstream applications.
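Since the card does not include a usage snippet, here is a minimal sketch of loading the checkpoint for text generation with the `transformers` library. It assumes the weights are publicly available on the Hugging Face Hub under the ID shown above and that `transformers` and `torch` are installed; the `generate` helper and the example prompt are illustrative, not part of the original card.

```python
# Hypothetical usage sketch for this model card; the model ID comes from
# the card itself, everything else is an assumption.
MODEL_ID = "Azimjon2313/my-qwen3-14b-finetuned"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion from the fine-tuned Qwen3 model."""
    # Imports are kept inside the function so the sketch can be read
    # (and the constants reused) without the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",  # load in the checkpoint's stored precision
        device_map="auto",   # spread layers across available GPUs/CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens and decode only the newly generated text.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain what fine-tuning a language model means."))
```

Note that a 14B FP8 checkpoint still needs on the order of 15 GB of accelerator memory; `device_map="auto"` will offload to CPU if the GPU is too small, at a significant speed cost.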
