TReV-89/sunflower-14b-grpo-factuality_v11
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Context Length: 32k · Published: Mar 13, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
TReV-89/sunflower-14b-grpo-factuality_v11 is a 14-billion-parameter Qwen3-based model developed by TReV-89, fine-tuned from jq/sunflower-14b-bs64-lr1e-4-250919. It was trained with Unsloth and Hugging Face's TRL library, which accelerated training by roughly 2x. With a context length of 32,768 tokens, it is designed for applications requiring efficient and factual language generation.
Model Overview
TReV-89/sunflower-14b-grpo-factuality_v11 is a 14-billion-parameter Qwen3-based language model developed by TReV-89, fine-tuned from the jq/sunflower-14b-bs64-lr1e-4-250919 model.
Key Characteristics
- Architecture: Based on the Qwen3 model family.
- Parameter Count: 14 billion parameters.
- Context Length: Supports a substantial context window of 32768 tokens.
- Training Efficiency: Training was accelerated by roughly 2x using the Unsloth library in conjunction with Hugging Face's TRL library.
Potential Use Cases
- Factuality-focused applications: The "grpo-factuality" suffix suggests fine-tuning with GRPO (Group Relative Policy Optimization) targeting factual accuracy, so the model is likely best suited to tasks where factual reliability matters.
- Long-context understanding: The 32768 token context length makes it suitable for processing and generating longer texts.
- Efficient deployment: The FP8 quantization listed on the card reduces memory footprint and can speed up inference on hardware with native FP8 support, which is advantageous for serving.
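A minimal usage sketch with Hugging Face `transformers` is shown below. It assumes the repo id resolves on the Hub and that the tokenizer ships a Qwen3-style chat template; neither is confirmed by this card, and the prompt is purely illustrative.

```python
MODEL_ID = "TReV-89/sunflower-14b-grpo-factuality_v11"
MAX_CONTEXT = 32_768  # context window stated on the card


def generation_kwargs(max_new_tokens: int = 512) -> dict:
    """Build generate() kwargs, clamping output length to the 32k window."""
    return {"max_new_tokens": min(max_new_tokens, MAX_CONTEXT), "do_sample": False}


if __name__ == "__main__":
    # Imported lazily so the helper above stays importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Assumed Qwen3-style chat template; adjust if the repo documents another.
    messages = [{"role": "user", "content": "Summarize the causes of tides."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, **generation_kwargs())
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is chosen here only because it is a reasonable default for factuality-oriented evaluation; sampling parameters can be substituted as needed.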