akera/Sunflower-32B-GRPO
Task: Text Generation · Model Size: 32B · Quant: FP8 · Ctx Length: 32k · Concurrency Cost: 2 · Published: Feb 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

akera/Sunflower-32B-GRPO is a 32-billion-parameter Qwen3-based causal language model developed by akera and fine-tuned from Sunbird/Sunflower-32B. Fine-tuning used Unsloth together with Hugging Face's TRL library, which substantially accelerated training. Its primary use case is general language generation and understanding, benefiting from its large parameter count and optimized training process.
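A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint is hosted under the repo id shown on this card; the `device_map` and generation settings are illustrative assumptions, not published defaults:

```python
# Hypothetical loading sketch for akera/Sunflower-32B-GRPO via transformers.
# Note: the 32B FP8 checkpoint needs substantial GPU memory; dtype and
# device placement below are assumptions, adjust for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "akera/Sunflower-32B-GRPO"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the checkpoint and return a text completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",   # shard across available devices
        torch_dtype="auto",  # keep the dtypes stored in the checkpoint
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The model weights are only downloaded when `generate` is called; defining the function itself is free, so the sketch can be imported without pulling the full checkpoint.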
