RJTPP/scot0402s-deepseek-14b-REF-full

Text Generation · Model Size: 14.8B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Apr 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

RJTPP/scot0402s-deepseek-14b-REF-full is a 14.8-billion-parameter Qwen2 model developed by RJTPP, fine-tuned from unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit. It was trained using Unsloth together with Hugging Face's TRL library, a combination the authors report yields 2x faster training. The model is intended for general language generation tasks, combining the Qwen2 architecture with an efficient fine-tuning process.


Model Overview

RJTPP/scot0402s-deepseek-14b-REF-full is a 14.8-billion-parameter language model developed by RJTPP. It is a Qwen2-based model, fine-tuned from the unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit base model using the Unsloth library in conjunction with Hugging Face's TRL library, which enabled roughly 2x faster fine-tuning.
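
The card does not include a usage snippet, so the following is a minimal inference sketch. It assumes the checkpoint loads through the standard Hugging Face transformers API and that the tokenizer inherits a chat template from the DeepSeek-R1-Distill-Qwen-14B base; the prompt, dtype, and generation settings are illustrative, not the authors' recommendations.

```python
# Minimal inference sketch (assumptions noted in comments).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RJTPP/scot0402s-deepseek-14b-REF-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed local dtype; the hosted endpoint lists FP8
    device_map="auto",
)

# Assumes a chat template inherited from the DeepSeek-R1-Distill base tokenizer.
messages = [{"role": "user", "content": "Briefly explain what model distillation is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```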

Key Characteristics

  • Architecture: Based on the Qwen2 model family.
  • Parameter Count: 14.8 billion parameters.
  • Context Length: Supports a context length of 32768 tokens.
  • Training Efficiency: Leverages Unsloth for significantly faster fine-tuning; a sketch of such a setup follows this list.
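
Training details beyond the Unsloth + TRL mention are not published, so the sketch below is a hedged reconstruction of what such a setup typically looks like. The LoRA rank, target modules, dataset, and trainer arguments are all assumptions, not the configuration actually used to produce this model.

```python
# Hypothetical Unsloth + TRL fine-tuning sketch. Hyperparameters and the toy
# dataset are illustrative assumptions, not the authors' actual configuration.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import Dataset

# Load the 4-bit base checkpoint named on the model card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
    max_seq_length=32768,  # matches the advertised 32k context
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth patches the model for faster training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset; the real training data is not disclosed.
dataset = Dataset.from_list([
    {"text": "### Question:\nWhat is 2 + 2?\n### Answer:\n<think>2 + 2 = 4</think> 4"},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,       # illustrative
        learning_rate=2e-4,  # illustrative
        output_dir="outputs",
    ),
)
trainer.train()
```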

Intended Use Cases

This model is suitable for a variety of natural language processing tasks, particularly those that benefit from a Qwen2-based architecture and efficient training. Its 14.8 billion parameters and 32k-token context window make it a capable option for applications requiring robust language understanding and generation; a hypothetical long-context usage sketch follows.
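
As one concrete (hypothetical) usage pattern, the 32k context window allows an entire document to be placed in the prompt. The snippet below sketches long-document summarization via the transformers pipeline API; the document and prompt wording are placeholders.

```python
# Hypothetical long-context usage; the document contents are a placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RJTPP/scot0402s-deepseek-14b-REF-full",
    device_map="auto",
    torch_dtype="auto",
)

long_document = "..."  # up to roughly 32k tokens of source material
prompt = f"Read the following document and summarize it.\n\n{long_document}\n\nSummary:"

result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"][len(prompt):])
```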