PetarKal/Qwen3-4B-Base-ascii-art-v5-lr2e-5-ga16-ctx4096

Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Mar 23, 2026 · Architecture: Transformer

PetarKal/Qwen3-4B-Base-ascii-art-v5-lr2e-5-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Base to generate ASCII art. It was trained with the TRL framework; the model name encodes the run's hyperparameters (learning rate 2e-5, gradient accumulation 16, training sequence length 4096), and the model supports a 32768-token context window at inference.


Model Overview

This model, PetarKal/Qwen3-4B-Base-ascii-art-v5-lr2e-5-ga16-ctx4096, is a specialized fine-tuned version of the Qwen3-4B-Base architecture. Developed by PetarKal, it focuses on generating ASCII art, distinguishing it from general-purpose language models.

Key Capabilities

  • ASCII Art Generation: The primary capability of this model is to produce creative and structured ASCII art based on given prompts.
  • Qwen3-4B-Base Foundation: Benefits from the robust base capabilities of the Qwen3-4B architecture, adapted for a niche artistic application.
  • Extended Context Window: Supports a context length of 32768 tokens inherited from the base model (the fine-tuning itself used 4096-token sequences, per the model name), allowing for longer prompts and larger, more detailed pieces.
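A minimal way to try the model is via the transformers text-generation pipeline. The sketch below is an assumption, not documented usage: the card specifies no prompt template, so `build_prompt` is a hypothetical helper, and the sampling settings are illustrative.

```python
# Hypothetical usage sketch for generating ASCII art with this model.
# The prompt shape and sampling settings are assumptions; the model card
# documents no official template.
MODEL_ID = "PetarKal/Qwen3-4B-Base-ascii-art-v5-lr2e-5-ga16-ctx4096"

def build_prompt(subject: str) -> str:
    """Build a plain completion-style prompt (hypothetical format)."""
    return f"ASCII art of {subject}:\n"

def generate_ascii_art(subject: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import pipeline
    generator = pipeline("text-generation", model=MODEL_ID, torch_dtype="bfloat16")
    out = generator(
        build_prompt(subject),
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
    )
    return out[0]["generated_text"]
```

Because this is a fine-tune of a base model rather than a chat model, plain completion-style prompts like the above are the likely interface.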

Training Details

The model was fine-tuned using Supervised Fine-Tuning (SFT) with the TRL library. This training approach specifically tailored the model's outputs towards ASCII art generation, moving beyond the general text generation of its base model.

Good For

  • Creative Text-Based Art: Ideal for users looking to generate unique ASCII art for various applications, such as terminal interfaces, code comments, or artistic projects.
  • Exploration of Niche LLM Applications: Demonstrates the potential for fine-tuning large language models for highly specific and creative tasks outside of standard text generation or summarization.