Zheng-Zong/AronaR1-DS-7B-v3-epoch_4
Zheng-Zong/AronaR1-DS-7B-v3-epoch_4 is a 7.6-billion-parameter Qwen2-based causal language model, finetuned by Zheng-Zong from unsloth/DeepSeek-R1-Distill-Qwen-7B. It was trained with Unsloth and Hugging Face's TRL library for faster finetuning, and with a context length of 32,768 tokens it is designed for general language generation tasks.
Overview
Zheng-Zong/AronaR1-DS-7B-v3-epoch_4 is a 7.6-billion-parameter language model developed by Zheng-Zong. It is a finetuned version of the unsloth/DeepSeek-R1-Distill-Qwen-7B base model and inherits the Qwen2 architecture.
Key Characteristics
- Base Model: Finetuned from unsloth/DeepSeek-R1-Distill-Qwen-7B (Qwen2 architecture).
- Parameter Count: 7.6 billion parameters.
- Context Length: Supports a context window of 32,768 tokens.
- Training Method: Finetuned with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training.
- License: Distributed under the Apache-2.0 license.
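The snippet below is a minimal loading sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id above and follows the standard transformers AutoModel interface; the dtype and device-placement choices are illustrative, not taken from this card.

```python
# Minimal loading sketch (assumptions: standard transformers checkpoint layout;
# dtype/device settings are illustrative, not specified by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zheng-Zong/AronaR1-DS-7B-v3-epoch_4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 7.6B weights on a single large GPU
    device_map="auto",           # let accelerate place layers automatically
)
```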
Potential Use Cases
This model is suitable for a variety of general-purpose language generation and understanding tasks, benefiting from its Qwen2 foundation and Unsloth-accelerated finetuning.
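As a usage sketch for such generation tasks, the example below reuses the `tokenizer` and `model` loaded above and assumes the finetune retained the chat template of its DeepSeek-R1-Distill-Qwen-7B base; the sampling settings are illustrative, not from this card.

```python
# Hedged generation example (assumption: the finetune keeps the base model's
# chat template; sampling parameters are illustrative).
messages = [
    {"role": "user", "content": "Explain what model distillation is in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,  # well within the 32,768-token context window
    do_sample=True,
    temperature=0.6,
)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```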