sanaeai/Qwen2.5-14B-Instruct-1M-rep-ce

Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

sanaeai/Qwen2.5-14B-Instruct-1M-rep-ce is a 14.8-billion-parameter instruction-tuned causal language model, finetuned from Qwen/Qwen2.5-14B-Instruct-1M. Developed by sanaeai, the model was trained with Unsloth and Hugging Face's TRL library, which the authors report enabled 2x faster training. It is designed for general instruction-following tasks, building on the Qwen2.5 architecture and a 32,768-token context length.
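Since this is a standard Qwen2.5-family checkpoint, it should load with the usual transformers chat workflow. The following is a minimal inference sketch, assuming the weights are downloadable from the Hugging Face Hub under the repo id above; the prompt and generation settings are illustrative, not recommendations from the model authors.

```python
# Minimal inference sketch for a Qwen2.5-family instruct model.
# Assumes the checkpoint is available on the Hugging Face Hub under
# the repo id below, and that you have enough GPU memory for 14.8B
# parameters (or use a quantized variant).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanaeai/Qwen2.5-14B-Instruct-1M-rep-ce"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype from the checkpoint config
    device_map="auto",    # shard across available GPUs
)

# Qwen2.5 instruct models ship a chat template, so format the prompt
# as a message list rather than raw text.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen2.5 architecture in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding the reply.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```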


sanaeai/Qwen2.5-14B-Instruct-1M-rep-ce Overview

This model is an instruction-tuned variant of the Qwen2.5-14B-Instruct-1M base model, developed by sanaeai. It has 14.8 billion parameters and supports a 32,768-token context length. A key differentiator is its training methodology: finetuning was done with Unsloth and Hugging Face's TRL library, which the authors report sped up the process by 2x.

Key Capabilities

  • Instruction Following: Optimized for understanding and executing a wide range of natural language instructions.
  • Efficient Training: Finetuned with Unsloth's optimizations, which also make further adaptation of the model comparatively fast and memory-efficient; see the sketch after this list.
  • Qwen2.5 Architecture: Leverages the robust capabilities of the Qwen2.5 model family.
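The training claim above (Unsloth plus TRL for roughly 2x faster finetuning) matches the common Unsloth SFT recipe. Below is a minimal sketch of that recipe, not the authors' actual training script: the dataset, LoRA hyperparameters, and trainer arguments are placeholders, and exact signatures vary across Unsloth and TRL versions.

```python
# Sketch of an Unsloth + TRL supervised finetuning run of the kind the
# card references. Hyperparameters and the dataset are illustrative
# only; this is not the authors' training configuration.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-14B-Instruct-1M",  # the stated base model
    max_seq_length=32768,
    load_in_4bit=True,  # fit 14.8B parameters on a single large GPU
)

# Attach LoRA adapters; Unsloth patches the layers with its fast kernels.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: one "text" column of chat-template-formatted examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        max_steps=1000,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```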

Good For

  • Applications requiring a capable instruction-tuned model with a substantial parameter count.
  • Developers interested in models trained with efficient finetuning techniques like Unsloth.
  • General-purpose natural language understanding and generation tasks.