ChuGyouk/R11
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Mar 27, 2026 · Architecture: Transformer

ChuGyouk/R11 is an 8 billion parameter causal language model, fine-tuned from ChuGyouk/Qwen3-8B-Base. This model was trained using Supervised Fine-Tuning (SFT) with the TRL framework, making it suitable for general text generation tasks. It supports a context length of 32768 tokens, providing extensive capacity for processing longer inputs and generating coherent, extended responses.


ChuGyouk/R11: An 8B Parameter Fine-Tuned Language Model

ChuGyouk/R11 is an 8 billion parameter causal language model developed by ChuGyouk. It is a fine-tuned iteration of the ChuGyouk/Qwen3-8B-Base model, trained with the TRL (Transformer Reinforcement Learning) framework.

Key Capabilities

  • General Text Generation: Optimized for a wide array of text generation tasks through Supervised Fine-Tuning (SFT).
  • Extended Context Window: Features a substantial context length of 32768 tokens, enabling the model to handle and generate longer, more complex textual sequences.
  • Foundation Model: Built upon the Qwen3-8B-Base architecture, providing a robust foundation for various NLP applications.
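Since the model card does not include a usage snippet, here is a minimal inference sketch using the standard Hugging Face `transformers` API. It assumes the model is published under the Hub path `ChuGyouk/R11`; adjust the identifier if the repository lives elsewhere.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repository path for this model.
MODEL_ID = "ChuGyouk/R11"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load ChuGyouk/R11 and return a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick the checkpoint's native precision
        device_map="auto",    # place layers on available GPU(s)/CPU
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize the benefits of long-context language models."))
```

Because the model supports a 32768-token context, long documents can be passed directly in the prompt without chunking, up to that limit.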

Training Details

The model underwent Supervised Fine-Tuning (SFT) using the TRL framework. The training environment utilized specific versions of key libraries:

  • TRL: 0.24.0
  • Transformers: 5.2.0
  • PyTorch: 2.10.0
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

This model is well-suited for developers looking for a capable 8B parameter model for general-purpose text generation, especially when long context understanding is beneficial.