g4me/QwenRolina3-Base-LR1e5-b32g2gc8-order-ppl
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 18, 2026 · Architecture: Transformer · Gated · Cold

The g4me/QwenRolina3-Base-LR1e5-b32g2gc8-order-ppl model is a 2-billion-parameter language model fine-tuned from Qwen/Qwen3-1.7B-Base. It was trained with the TRL framework using a context length of 32768 tokens. Building on the Qwen3 architecture, it is intended for general text generation tasks.
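A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under this repository id and loadable with the `transformers` library (the prompt and generation settings below are illustrative, not from the model card):

```python
MODEL_ID = "g4me/QwenRolina3-Base-LR1e5-b32g2gc8-order-ppl"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a continuation of `prompt` with the model (sketch).

    Imports are local so the module can be inspected without
    `transformers` installed; loading downloads the ~2B checkpoint.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The Qwen3 architecture is"))
```

Since this is a base model (not instruction-tuned), prompts should be written as text to be continued rather than as chat-style instructions.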
