PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Architecture: Transformer · Published: Apr 7, 2026

PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6 is a 4-billion-parameter language model fine-tuned from PetarKal/Qwen3-4B-Base-ascii-art-v6-phase1-understanding. It was trained with supervised fine-tuning (SFT) using the TRL framework and is designed for text generation tasks, with a 32,768-token context length.

Model Overview

This model, PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6, is the phase-2c, generation-focused stage of the ascii-art-v6 series. It continues from the phase-1 "understanding" checkpoint (PetarKal/Qwen3-4B-Base-ascii-art-v6-phase1-understanding), itself derived from Qwen3-4B-Base, and the lr3e6 suffix suggests a 3e-6 learning rate for this phase.

Key Capabilities

  • Text Generation: The model is primarily designed for generating text, as its phase2c-generation designation suggests (see the usage sketch below).
  • Fine-tuned with TRL: Training used the TRL (Transformer Reinforcement Learning) framework with Supervised Fine-Tuning (SFT).
  • Extended Context Window: It supports a 32,768-token context length, allowing it to process and generate long sequences of text.
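
The snippet below is a minimal generation sketch using the Transformers library; the prompt and sampling parameters are illustrative assumptions, not values published with the model.

```python
# Minimal generation sketch; prompt text and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.bfloat16,  # BF16 as listed above; older Transformers releases call this torch_dtype
    device_map="auto",
)

# Hypothetical prompt; the repository name suggests ASCII-art generation.
prompt = "Draw a simple ASCII art cat:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative; the context window itself is 32,768 tokens
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```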

Training Details

The model was trained with SFT, continuing from the phase-1 checkpoint. The run used the following framework versions:

  • TRL: 0.29.1
  • Transformers: 5.5.0
  • PyTorch: 2.10.0
  • Datasets: 4.8.4
  • Tokenizers: 0.22.2
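
For reference, a hypothetical SFT run with these libraries might look like the sketch below. The data file, batch settings, and the 3e-6 learning rate (read from the model name's lr3e6 suffix) are all assumptions, not the published training recipe.

```python
# Hypothetical SFT sketch with TRL's SFTTrainer; hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset of prompt/completion records in JSONL form.
dataset = load_dataset(
    "json", data_files="ascii_art_generation.jsonl", split="train"
)

config = SFTConfig(
    output_dir="Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6",
    learning_rate=3e-6,             # suggested by the lr3e6 suffix
    max_length=32768,               # matches the advertised context window
    per_device_train_batch_size=1,  # assumed
    gradient_accumulation_steps=8,  # assumed
    bf16=True,
)

trainer = SFTTrainer(
    model="PetarKal/Qwen3-4B-Base-ascii-art-v6-phase1-understanding",  # phase-1 checkpoint as the base
    args=config,
    train_dataset=dataset,
)
trainer.train()
```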

Good For

  • Developers looking for a Qwen3-based model fine-tuned for generation tasks (the repository name suggests ASCII-art generation).
  • Experimentation with models trained using the TRL framework and SFT methods.