PetarKal/Qwen3-4B-Base-ascii-art-v5-no140k-e3-lr5e-5-ga16-ctx4096

Text generation · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 28, 2026 · Architecture: Transformer (hosted on Hugging Face)

PetarKal/Qwen3-4B-Base-ascii-art-v5-no140k-e3-lr5e-5-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Base by PetarKal. It is specialized for generating ASCII art, inherits the base model's 32,768-token context window, and was trained with the TRL library. It is best suited for applications that turn textual prompts into ASCII art.


Model Overview

This model, PetarKal/Qwen3-4B-Base-ascii-art-v5-no140k-e3-lr5e-5-ga16-ctx4096, is a 4-billion-parameter fine-tune of Qwen/Qwen3-4B-Base, developed by PetarKal and trained with the TRL (Transformer Reinforcement Learning) library. The repository name appears to encode the training recipe: 3 epochs (e3), a learning rate of 5e-5 (lr5e-5), gradient accumulation over 16 steps (ga16), and 4096-token training sequences (ctx4096).
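The card does not document a prompt template; since the model derives from a base (non-chat) checkpoint, plain completion prompting is a reasonable default. Below is a minimal inference sketch using the transformers library; the prompt wording and sampling settings are illustrative assumptions, not documented behavior.

```python
# Minimal inference sketch with Hugging Face transformers.
# The prompt format is an assumption: the card documents no template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PetarKal/Qwen3-4B-Base-ascii-art-v5-no140k-e3-lr5e-5-ga16-ctx4096"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

# Base-model completion prompting: describe the desired art and let
# the model continue. The exact wording is illustrative only.
prompt = "ASCII art of a cat:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,  # ASCII art can span many lines; budget generously
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```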

Key Capabilities

  • Specialized Generation: Fine-tuned specifically for producing ASCII art.
  • Base Model: Built on Qwen/Qwen3-4B-Base.
  • Context Length: Supports the base model's 32,768-token context window, allowing long, detailed prompts and large art outputs; the ctx4096 suffix suggests the fine-tuning itself used 4096-token sequences.
  • Training Framework: Trained with the TRL library (see the reproduction sketch after this list).
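The actual training script is not published, but the hyperparameters in the model name suggest a conventional TRL supervised fine-tuning run. The sketch below is an assumption-laden reconstruction: the dataset file, output directory, and batch size are hypothetical placeholders, and only the values taken from the model name (epochs, learning rate, gradient accumulation, sequence length) are grounded in the card.

```python
# Hypothetical reconstruction of the fine-tune with TRL's SFTTrainer.
# Only num_train_epochs, learning_rate, gradient_accumulation_steps,
# and the 4096-token sequence length come from the model name; the
# dataset and remaining settings are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder corpus: a JSONL file with a "text" column of ASCII-art samples.
train_dataset = load_dataset("json", data_files="ascii_art_train.jsonl", split="train")

config = SFTConfig(
    output_dir="qwen3-4b-ascii-art-sft",  # placeholder
    num_train_epochs=3,                   # "e3" in the model name
    learning_rate=5e-5,                   # "lr5e-5"
    gradient_accumulation_steps=16,       # "ga16"
    max_seq_length=4096,                  # "ctx4096" (renamed max_length in newer TRL releases)
    per_device_train_batch_size=1,        # placeholder
    bf16=True,                            # matches the published BF16 weights
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-4B-Base",  # the documented base model
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```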

When to Use This Model

This model is best suited to use cases whose primary objective is creating ASCII art from textual prompts. Because the fine-tuning targets this niche specifically, it is a strong candidate for:

  • Creative applications involving text-to-ASCII art conversion.
  • Generating unique visual text representations.
  • Experiments in specialized generative AI outputs beyond standard text completion.