Zheng-Zong/AronaR1-DS-7B-v2-epoch_1

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

AronaR1-DS-7B-v2-epoch_1 is a 7.6-billion-parameter Qwen2 model developed by Zheng-Zong, fine-tuned from unsloth/DeepSeek-R1-Distill-Qwen-7B. It was trained with Unsloth and Hugging Face's TRL library, a combination that roughly doubles training speed. The model supports a 32,768-token context length and is designed for general language tasks.
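
For quick orientation, here is a minimal inference sketch using Hugging Face Transformers. The repository id is the one on this page and the chat-template usage follows the DeepSeek-R1 distill lineage named above; the dtype and generation settings are illustrative assumptions, not settings published by the author.

```python
# Minimal inference sketch; generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zheng-Zong/AronaR1-DS-7B-v2-epoch_1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# R1-distilled models use a chat template; apply it rather than raw strings.
messages = [{"role": "user", "content": "Explain gradient checkpointing briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```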


Overview

AronaR1-DS-7B-v2-epoch_1 is a 7.6-billion-parameter language model developed by Zheng-Zong. It is a Qwen2-based model, fine-tuned from the unsloth/DeepSeek-R1-Distill-Qwen-7B base model. This iteration, v2-epoch_1, was trained with a focus on efficiency, pairing the Unsloth library with Hugging Face's TRL library for roughly 2x faster training.
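
The Unsloth + TRL combination referenced here typically follows the recipe below. This is a hedged sketch of that general pattern, not the author's actual training script: the dataset file, LoRA configuration, and hyperparameters are assumptions for illustration only.

```python
# Sketch of the common Unsloth + TRL supervised fine-tuning pattern.
# Dataset, LoRA config, and hyperparameters are placeholder assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Qwen-7B",  # base model named on this card
    max_seq_length=32768,
    load_in_4bit=True,  # assumption: a typical Unsloth memory-saving setting
)

# Attach LoRA adapters; rank and target modules are common defaults,
# not the author's actual configuration.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # mirrors the "epoch_1" suffix in the model name
        output_dir="outputs",
    ),
)
trainer.train()
```

The `num_train_epochs=1` setting mirrors the `epoch_1` suffix in the model name; everything else should be treated as a placeholder.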

Key Characteristics

  • Model Family: Qwen2
  • Parameters: 7.6 billion
  • Context Length: 32,768 tokens (see the serving sketch after this list)
  • Training Efficiency: fine-tuned with Unsloth for accelerated training
  • License: Apache-2.0
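
The hosting metadata above lists an FP8 quantization alongside the 32k context window. One plausible way to serve the model at those settings is vLLM's FP8 support, sketched below; whether the published checkpoint ships pre-quantized weights is an assumption worth verifying against the repository.

```python
# Serving sketch at the advertised FP8 quant and 32k context via vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Zheng-Zong/AronaR1-DS-7B-v2-epoch_1",
    max_model_len=32768,   # the full advertised context length
    quantization="fp8",    # requires FP8-capable hardware (e.g. Hopper GPUs)
)

params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Summarize the Qwen2 architecture in two sentences."], params)
print(outputs[0].outputs[0].text)
```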

Use Cases

This model is suited to general-purpose language generation and understanding tasks, combining efficient training with a substantial parameter count. Its 32k-token context window lets it process and generate long texts in a single pass, as the sketch below illustrates.
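
To make the long-context claim concrete, the sketch below budgets a document against the 32,768-token window before prompting; the file path and the head-truncation policy are illustrative assumptions.

```python
# Budget a long document against the 32k context before prompting.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zheng-Zong/AronaR1-DS-7B-v2-epoch_1")

MAX_CONTEXT = 32768
RESERVED_FOR_OUTPUT = 1024  # leave headroom for the generated answer

with open("report.txt") as f:  # hypothetical input document
    document = f.read()

ids = tokenizer(document)["input_ids"]
budget = MAX_CONTEXT - RESERVED_FOR_OUTPUT
if len(ids) > budget:
    # Naive head truncation; a real pipeline might chunk or summarize instead.
    ids = ids[:budget]
prompt = tokenizer.decode(ids)
print(f"Prompt uses {len(ids)} of {MAX_CONTEXT} tokens.")
```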