Jackrong/DASD-4B-Thinking-2507-GRPO-v2
Text Generation · Open Weights
Model Size: 4B · Quant: BF16 · Context Length: 32k · Concurrency Cost: 1
Architecture: Transformer · License: apache-2.0 · Published: Feb 10, 2026
Jackrong/DASD-4B-Thinking-2507-GRPO-v2 is a 4-billion-parameter causal language model based on Qwen3, developed by Jackrong. It was fine-tuned with Unsloth and Hugging Face's TRL library, which enabled faster training. The model targets general language tasks, relying on its Qwen3 architecture for efficient processing.
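Since this is a standard Qwen3-based causal LM with open BF16 weights, it should load with the usual Hugging Face Transformers API. The sketch below is illustrative, not an official usage guide: the generation settings are assumptions, and the `build_messages` and `generate` helpers are hypothetical names introduced here for clarity.

```python
# Hypothetical loading/inference sketch for this model with Transformers.
# Assumes `transformers` and `torch` are installed and enough memory for 4B
# parameters in BF16; nothing here is an official recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Jackrong/DASD-4B-Thinking-2507-GRPO-v2"


def build_messages(prompt: str) -> list[dict]:
    # Chat-message format consumed by tokenizer.apply_chat_template().
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 512) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the published quantization; device_map="auto" places
    # layers on whatever accelerators are available.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the completion is returned.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Usage would then be a single call such as `generate("Explain GRPO in one sentence.")`; since this is a "Thinking" model, the completion may include a reasoning trace before the final answer.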