lhkhiem28/Qwen2.5-1.5B-GRPO-evo-0
TEXT GENERATION
- Model size: 1.5B
- Quantization: BF16
- Context length: 32k
- Concurrency cost: 1
- Published: Feb 27, 2026
- License: apache-2.0
- Architecture: Transformer (open weights)

lhkhiem28/Qwen2.5-1.5B-GRPO-evo-0 is a 1.54 billion parameter causal language model built on the Qwen2.5 series developed by the Qwen team. It uses a transformer architecture with RoPE positional embeddings, SwiGLU activations, and RMSNorm, and supports a 32,768-token context length. As a base model, it is intended as a foundation for further fine-tuning rather than direct chat use, targeting knowledge, coding, mathematics, and long-text generation.
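A minimal sketch of running the model with Hugging Face Transformers, assuming the checkpoint is hosted on the Hugging Face Hub under this repo id and that `transformers` and `torch` are installed; the sampling parameters here are illustrative, not recommendations from the model authors:

```python
# Hypothetical usage sketch: loads lhkhiem28/Qwen2.5-1.5B-GRPO-evo-0 in BF16
# (the precision listed on this page) and completes a prompt.

MODEL_ID = "lhkhiem28/Qwen2.5-1.5B-GRPO-evo-0"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model lazily and continue `prompt` as plain text.

    Imports are deferred so this module stays importable without
    torch/transformers; the first call downloads ~3 GB of BF16 weights.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
        device_map="auto",           # place on GPU if available, else CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated text is returned.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The capital of France is"))
```

Because this is a base (pretrained) model rather than an instruct variant, it is best prompted with text to complete rather than chat-style instructions.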
