Zheng-Zong/AronaR1-SFT-stage1-v2-checkpoint250
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32K · Published: Mar 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

AronaR1-SFT-stage1-v2-checkpoint250 is a 7.6-billion-parameter Qwen2-based causal language model developed by Zheng-Zong. It was fine-tuned from Zheng-Zong/AronaR1-SFT-stage1 using Unsloth and Hugging Face's TRL library, with an emphasis on training efficiency. Its 32K context length makes it well suited to tasks that require processing longer sequences.
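As a minimal sketch, the checkpoint can be loaded like any other Qwen2-based causal LM via the Hugging Face Transformers `AutoModelForCausalLM` / `AutoTokenizer` interfaces (the helper function name and generation settings below are illustrative, not part of the model card):

```python
# Illustrative loader for the checkpoint; requires `transformers` and `torch`.
MODEL_ID = "Zheng-Zong/AronaR1-SFT-stage1-v2-checkpoint250"

def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the checkpoint.

    Imports are deferred so the module can be inspected without the heavy
    dependencies installed. `torch_dtype="auto"` lets Transformers pick the
    dtype stored in the checkpoint; `device_map="auto"` spreads weights
    across available accelerators.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    prompt = "Explain the advantage of a 32K context window."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the weights are published in FP8, serving stacks that support FP8 kernels (e.g. vLLM on recent NVIDIA GPUs) can run the checkpoint without further quantization; on other hardware, Transformers will dequantize as needed.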
