Just-Bax/Qwen3-14B-Base-Uzbek-Cyrillic

Text generation · Model size: 14B · Quantization: FP8 · Context length: 32k · Published: Nov 1, 2025 · License: apache-2.0 · Architecture: Transformer

Just-Bax/Qwen3-14B-Base-Uzbek-Cyrillic is a 14.8 billion parameter Transformer Decoder model, fine-tuned from Qwen/Qwen3-14B-Base using LoRA. Developed by Just-Bax, it is specifically optimized for natural language generation, chat, and summarization in Uzbek (Cyrillic script), while retaining the base model's broader multilingual capabilities. This model is designed for applications requiring grammatically coherent text in Uzbek Cyrillic.


Model Overview

Just-Bax/Qwen3-14B-Base-Uzbek-Cyrillic is a 14.8 billion parameter causal language model fine-tuned from Qwen/Qwen3-14B-Base. It uses a Transformer decoder architecture and was fine-tuned with LoRA (Low-Rank Adaptation) via Unsloth, a framework for memory-efficient fine-tuning. The model runs in bfloat16 precision and supports a context length of 32,768 tokens.
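A minimal inference sketch with Hugging Face transformers, assuming standard `AutoModelForCausalLM` loading works for this checkpoint. The prompt helper and generation settings below are illustrative assumptions, not values published by the model author:

```python
# Hedged sketch: loading the model for inference with transformers.
# MODEL_ID is the repository name from this card; everything else
# (prompt format, max_new_tokens) is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Just-Bax/Qwen3-14B-Base-Uzbek-Cyrillic"

def build_prompt(instruction: str) -> str:
    """Plain-text prompt for a base-style model (no chat template assumed)."""
    return f"{instruction.strip()}\n"

def main() -> None:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the precision noted above
        device_map="auto",
    )
    # "Write a short text about Tashkent." in Uzbek Cyrillic
    inputs = tokenizer(
        build_prompt("Тошкент ҳақида қисқа матн ёзинг."),
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

# To run end-to-end (downloads ~15 GB of weights, needs a large GPU):
# main()
```

Since this is a base-model fine-tune, the sketch uses a plain completion-style prompt rather than a chat template.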

Key Capabilities

  • Uzbek Cyrillic Text Generation: Excels at producing natural and grammatically correct text in the Uzbek Cyrillic script.
  • Multilingual Support: Preserves the inherent multilingual capabilities of its base Qwen3-14B model, making it suitable for broader language applications.
  • Efficient Fine-tuning: Utilizes LoRA with specific parameters (r=16, α=32, dropout=0.0) for targeted adaptation.
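The LoRA hyperparameters above imply a very small trainable footprint relative to the frozen base weights. A rough sketch of the arithmetic (the 5120×5120 projection shape is an illustrative assumption, not a confirmed Qwen3-14B dimension):

```python
# Hedged sketch: trainable parameters LoRA (r=16, alpha=32) adds to one
# weight matrix. The layer shape below is illustrative, not confirmed.

def lora_params(d_in: int, d_out: int, r: int = 16) -> int:
    """LoRA factors W ~ W0 + (alpha/r) * B @ A, with A: (r, d_in), B: (d_out, r)."""
    return r * d_in + d_out * r

alpha, r = 32, 16
scaling = alpha / r  # effective scale on the low-rank update: 2.0

full = 5120 * 5120                      # ~26.2M frozen parameters
added = lora_params(5120, 5120, r=16)   # 163,840 trainable parameters
print(scaling, added, added / full)     # the adapter is well under 1% of the layer
```

This is why LoRA fine-tuning of a 14B model fits on modest hardware: only the low-rank A and B factors are updated.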

Intended Use Cases

  • Content Generation: Ideal for creating various forms of text content in Uzbek Cyrillic.
  • Conversational AI: Suitable for developing chatbots and dialogue systems in Uzbek.
  • Text Summarization: Can be used for summarizing documents and articles in Uzbek Cyrillic.
  • Central Asian Language Applications: Applicable in multilingual setups that involve Central Asian languages, particularly Uzbek.
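For the summarization use case, a base-style model typically needs an explicit instruction-plus-document prompt. A minimal sketch, assuming a plain completion format (the Uzbek instruction wording and length cap are assumptions, not a template published with the model):

```python
# Hedged sketch: assembling a summarization prompt in Uzbek Cyrillic for a
# base-style model. Instruction text and truncation limit are assumptions.

def summarization_prompt(document: str, max_chars: int = 8000) -> str:
    """Instruction + (truncated) document + a cue for the summary to follow."""
    doc = document.strip()[:max_chars]
    return (
        "Қуйидаги матнни қисқача хулоса қилинг.\n\n"  # "Summarize the following text."
        f"Матн: {doc}\n\n"
        "Хулоса:"
    )

prompt = summarization_prompt("Тошкент — Ўзбекистон пойтахти. " * 3)
print(prompt.endswith("Хулоса:"))  # the model continues after this cue
```

Keeping the document within the truncation limit leaves room in the 32,768-token context for the generated summary.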