PetarKal/Qwen3-4B-ascii-art-curated-mix-v5-full-lr2e-5-ga16-ctx4096

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 22, 2026 · Architecture: Transformer

PetarKal/Qwen3-4B-ascii-art-curated-mix-v5-full-lr2e-5-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B on a curated mix of ASCII-art data. It was trained with the TRL framework and supports a context length of 32768 tokens, making it suitable for creative text generation tasks involving ASCII art.


Model Overview

This model, PetarKal/Qwen3-4B-ascii-art-curated-mix-v5-full-lr2e-5-ga16-ctx4096, is a specialized fine-tune of the Qwen3-4B architecture, developed by PetarKal. It was trained with the TRL (Transformer Reinforcement Learning) framework to excel at generating ASCII art.

Key Capabilities

  • ASCII Art Generation: The primary focus of this model is to produce creative and structured ASCII art based on given prompts.
  • Qwen3-4B Base: Benefits from the foundational capabilities of the Qwen3-4B model, providing a strong language understanding base.
  • Extended Context Window: Features a context length of 32768 tokens, allowing longer and more detailed prompts for ASCII art generation.

Training Details

The model underwent SFT (Supervised Fine-Tuning). It was built with TRL 0.29.1, Transformers 5.3.0, PyTorch 2.10.0, Datasets 4.8.3, and Tokenizers 0.22.2. The training process can be further explored via its Weights & Biases run.
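The model name encodes the core hyperparameters (lr2e-5 for learning rate, ga16 for gradient accumulation, ctx4096 for the training sequence length). A minimal sketch of how such an SFT run might be configured with TRL is below; the output directory, the `SFTConfig` field names, and everything beyond the three named hyperparameters are assumptions, since the card does not publish a full training config, and the curated ASCII-art dataset itself is not public.

```python
def training_args():
    # Hyperparameters read off the model name: lr2e-5, ga16, ctx4096.
    # bf16 is assumed from the published BF16 weights; all else is illustrative.
    return {
        "learning_rate": 2e-5,
        "gradient_accumulation_steps": 16,
        "max_length": 4096,  # per-sample sequence length during SFT
        "bf16": True,
    }

if __name__ == "__main__":
    # Requires `pip install trl`; sketch only -- the training dataset is not public.
    from trl import SFTConfig, SFTTrainer

    config = SFTConfig(output_dir="qwen3-4b-ascii-art-sft", **training_args())
    # trainer = SFTTrainer(model="Qwen/Qwen3-4B", args=config, train_dataset=my_dataset)
    # trainer.train()
```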

When to Use This Model

This model is suited to applications that generate ASCII art from text prompts. Its specialization makes it a strong candidate for artistic projects, terminal-based visualizations, or any scenario where ASCII art output is preferred over plain text.
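A minimal inference sketch using the standard Transformers generation API is shown below. The prompt wording is an assumption (the card does not document a required prompt format), and loading a 4B BF16 model requires sufficient GPU or CPU memory.

```python
MODEL_ID = "PetarKal/Qwen3-4B-ascii-art-curated-mix-v5-full-lr2e-5-ga16-ctx4096"

def build_messages(subject):
    # Chat-style prompt; the exact wording is illustrative, not prescribed by the card.
    return [{"role": "user", "content": f"Draw ASCII art of {subject}."}]

if __name__ == "__main__":
    # Requires `pip install transformers torch` and enough memory for a 4B BF16 model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # `dtype=` in newer Transformers releases
        device_map="auto",
    )
    inputs = tokenizer.apply_chat_template(
        build_messages("a sailboat"), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```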