kairawal/Qwen3-14B-HI-SynthDolly-1A
Text generation · Concurrency cost: 1 · Model size: 14B · Quantization: FP8 · Context length: 32k · Published: Mar 27, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

kairawal/Qwen3-14B-HI-SynthDolly-1A is a 14-billion-parameter causal language model developed by kairawal and fine-tuned from unsloth/Qwen3-14B. It was trained with Unsloth and Hugging Face's TRL library, a combination reported to make training about 2x faster than a standard setup. The model targets general-purpose text generation across a range of applications.


Overview

kairawal/Qwen3-14B-HI-SynthDolly-1A is a 14-billion-parameter language model fine-tuned by kairawal from the base model unsloth/Qwen3-14B. Training used the Unsloth library together with Hugging Face's TRL, which Unsloth reports makes fine-tuning roughly 2x faster than standard methods.
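The card does not publish the exact training recipe, but a supervised fine-tuning run with Unsloth and TRL over the base checkpoint typically looks like the sketch below. All hyperparameters, the LoRA settings, and the dataset file are illustrative placeholders, not the configuration actually used to produce this model:

```python
# Hypothetical Unsloth + TRL SFT sketch. Hyperparameters, dataset path, and
# output directory are illustrative, NOT the recipe behind this checkpoint.
BASE_MODEL = "unsloth/Qwen3-14B"
MAX_SEQ_LENGTH = 32768  # matches the advertised 32k context window


def train():
    # Heavy imports live inside the function so the constants above can be
    # inspected (and tested) without Unsloth/TRL installed.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer, SFTConfig
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,  # QLoRA-style memory savings during training
    )
    # Attach LoRA adapters; rank/alpha here are common defaults, not known values.
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    # Placeholder instruction dataset in JSONL form.
    dataset = load_dataset("json", data_files="synth_dolly.jsonl", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-4,
            output_dir="qwen3-14b-sft",
        ),
    )
    trainer.train()


if __name__ == "__main__":
    train()
```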

Key Capabilities

  • Efficiently Fine-tuned: Benefits from Unsloth's optimizations for faster training.
  • General Language Generation: Suitable for a broad range of text generation tasks.
  • Qwen3 Architecture: Built upon the robust Qwen3 model family.
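For basic text generation, the checkpoint can be loaded with the standard transformers APIs. A minimal sketch follows; the prompt and generation settings are examples, and it assumes the weights are available on the Hugging Face Hub under the repo id shown:

```python
# Minimal text-generation sketch with transformers. MODEL_ID comes from the
# card above; the prompt and generation settings are illustrative.
MODEL_ID = "kairawal/Qwen3-14B-HI-SynthDolly-1A"


def build_chat_prompt(tokenizer, user_message):
    """Render a single-turn prompt via the tokenizer's chat template."""
    messages = [{"role": "user", "content": user_message}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


def main():
    # Imports inside main() so the helpers above are importable without
    # transformers/torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = build_chat_prompt(tokenizer, "Briefly explain what a language model is.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))


if __name__ == "__main__":
    main()
```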

Good for

  • Developers seeking a 14B parameter model with an efficient fine-tuning history.
  • Applications requiring a capable language model for text generation and understanding.
  • Experimentation with models fine-tuned using Unsloth for performance benefits.