qingy2024/GRMR-V2.5-1.7B
Text generation · Concurrency cost: 1 · Model size: 2B · Quant: BF16 · Context length: 32k · Published: Jun 2, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

GRMR-V2.5-1.7B by qingy2024 is a 1.7-billion-parameter Qwen3-based causal language model, fine-tuned from unsloth/Qwen3-1.7B-Base. It was trained with Unsloth and Hugging Face's TRL library, with an emphasis on efficient fine-tuning, and is intended for general language generation tasks.
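Since this is a standard Hugging Face-hosted causal language model, it can presumably be loaded with the Transformers library in the usual way. The sketch below is a minimal, hedged example (not from the model card itself); it assumes the `transformers` and `torch` packages are installed and uses bfloat16 to match the listed BF16 quantization.

```python
MODEL_ID = "qingy2024/GRMR-V2.5-1.7B"

def load_grmr(model_id: str = MODEL_ID):
    """Load the tokenizer and model in bfloat16 (matches the listed BF16 quant).

    Imports are kept inside the function so this sketch has no
    import-time dependencies; calling it downloads the weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    return tokenizer, model
```

A typical usage would then be `tokenizer, model = load_grmr()` followed by a standard `model.generate(...)` call on tokenized input.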
