PetarKal/Qwen3-4B-Base-ascii-art-v5-e3-lr1e-4-ga16-ctx4096

Hugging Face · Text generation
Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 25, 2026 · Architecture: Transformer · Concurrency cost: 1

PetarKal/Qwen3-4B-Base-ascii-art-v5-e3-lr1e-4-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Base. Developed by PetarKal, the model specializes in generating ASCII art and retains the base model's 32768-token context length (the ctx4096 suffix in the name most likely refers to the 4096-token sequence length used during fine-tuning). It was trained with the TRL framework using SFT, focusing on creative text generation tasks involving ASCII art, and is optimized for applications requiring stylized textual output rather than general-purpose language understanding.


Model Overview

PetarKal/Qwen3-4B-Base-ascii-art-v5-e3-lr1e-4-ga16-ctx4096 is a specialized 4-billion-parameter language model, fine-tuned from the Qwen3-4B-Base architecture. Its primary distinction lies in its fine-tuning for generating ASCII art, which sets it apart from general-purpose LLMs. The model was developed by PetarKal and trained using the TRL (Transformer Reinforcement Learning) framework with Supervised Fine-Tuning (SFT).
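The checkpoint name itself appears to encode the training hyperparameters. A small sketch that decodes them, assuming the common run-naming conventions (e = epochs, lr = learning rate, ga = gradient accumulation steps, ctx = training sequence length) — these conventions are an inference, not documented by the author:

```python
# Decode the hyperparameters embedded in the checkpoint name.
# The suffix meanings (e=epochs, lr=learning rate, ga=gradient
# accumulation, ctx=sequence length) are assumed from common
# fine-tuning run-naming practice.
import re

def parse_run_name(name: str) -> dict:
    params = {}
    if m := re.search(r"-e(\d+)(?=-|$)", name):
        params["epochs"] = int(m.group(1))
    if m := re.search(r"-lr([0-9.]+e-?\d+)", name):
        params["learning_rate"] = float(m.group(1))
    if m := re.search(r"-ga(\d+)", name):
        params["grad_accum_steps"] = int(m.group(1))
    if m := re.search(r"-ctx(\d+)", name):
        params["seq_length"] = int(m.group(1))
    return params

name = "PetarKal/Qwen3-4B-Base-ascii-art-v5-e3-lr1e-4-ga16-ctx4096"
print(parse_run_name(name))
# {'epochs': 3, 'learning_rate': 0.0001, 'grad_accum_steps': 16, 'seq_length': 4096}
```

Read this way, the model was trained for 3 epochs at a learning rate of 1e-4 with gradient accumulation of 16 on 4096-token sequences.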

Key Capabilities

  • Specialized ASCII Art Generation: This model is specifically optimized for creating textual representations of images or concepts using ASCII characters.
  • Qwen3-4B-Base Foundation: Benefits from the robust architecture of the Qwen3-4B-Base model.
  • Extended Context Length: Features a context window of 32768 tokens, allowing for more complex and detailed ASCII art generation or longer input prompts.
  • TRL Framework: Training used Hugging Face's TRL library with supervised fine-tuning, making the checkpoint straightforward to extend with further SFT or preference-tuning runs.
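Since this is a base-model fine-tune, it can be loaded with the standard Hugging Face transformers API. A minimal inference sketch follows; the completion-style prompt wording is an assumption, as the card does not document a required prompt template:

```python
# Sketch of loading the model for ASCII art generation via transformers.
# The prompt format is illustrative: as a base-model fine-tune, the
# checkpoint likely responds best to plain completion-style prompts.

def build_prompt(subject: str) -> str:
    """Completion-style prompt; exact wording is a guess, not documented."""
    return f"ASCII art of {subject}:\n"

def generate_ascii_art(subject: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so build_prompt stays usable without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "PetarKal/Qwen3-4B-Base-ascii-art-v5-e3-lr1e-4-ga16-ctx4096"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the card lists BF16 weights
        device_map="auto",
    )

    inputs = tokenizer(build_prompt(subject), return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
    )
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:],
        skip_special_tokens=True,
    )

# Example call (downloads the BF16 weights on first run):
# print(generate_ascii_art("a cat"))
```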

Good For

  • Creative Text Applications: Ideal for projects requiring the generation of stylized text or ASCII art.
  • Niche Content Creation: Suitable for developers looking to integrate unique visual elements into text-based interfaces or applications.
  • Experimentation with Stylized Output: Provides a strong base for exploring the capabilities of LLMs in non-standard text generation tasks.

This model differentiates itself by focusing on a highly specific creative output, moving beyond typical conversational or factual generation to deliver artistic textual content.