PetarKal/Qwen3-4B-ascii-art-curated-mix-full-e3-lr3e-5-ga16-ctx4096

Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 10, 2026 · Architecture: Transformer · Warm

PetarKal/Qwen3-4B-ascii-art-curated-mix-full-e3-lr3e-5-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Base and optimized specifically for generating ASCII art, trained with the TRL framework. It supports a context length of 32,768 tokens (the ctx4096 suffix in the name likely refers to the sequence length used during fine-tuning), making it suitable for ASCII art generation from extended prompts.


Model Overview

This model, PetarKal/Qwen3-4B-ascii-art-curated-mix-full-e3-lr3e-5-ga16-ctx4096, is a specialized fine-tuned version of Qwen3-4B-Base. Developed by PetarKal, it was trained with the TRL (Transformer Reinforcement Learning) framework, indicating optimization for a specific task rather than general-purpose language understanding.

Key Capabilities

  • ASCII Art Generation: The primary differentiator of this model is its fine-tuning for ASCII art, suggesting stronger and more consistent text-based visual output than a general-purpose model of the same size.
  • Base Model: Built on Qwen3-4B-Base, it inherits a robust foundation for language processing that is then adapted to its specialized task.
  • Training Framework: Trained with TRL, a framework commonly used to fine-tune models toward specific objectives or styles.
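The repository name itself appears to encode the fine-tuning hyperparameters (e3 = 3 epochs, lr3e-5 = learning rate, ga16 = gradient accumulation steps, ctx4096 = training sequence length). As a sketch, assuming that naming scheme, the suffix could be decoded like this:

```python
import re

def parse_training_suffix(name: str) -> dict:
    """Decode hyperparameters from an assumed naming scheme:
    e<epochs>-lr<rate>-ga<grad accum steps>-ctx<seq length>."""
    m = re.search(r"e(\d+)-lr([0-9.e-]+)-ga(\d+)-ctx(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognized name format: {name}")
    return {
        "num_train_epochs": int(m.group(1)),
        "learning_rate": float(m.group(2)),
        "gradient_accumulation_steps": int(m.group(3)),
        "max_seq_length": int(m.group(4)),
    }

hparams = parse_training_suffix(
    "PetarKal/Qwen3-4B-ascii-art-curated-mix-full-e3-lr3e-5-ga16-ctx4096"
)
```

These keys mirror the argument names used by TRL's supervised fine-tuning configuration, though the card does not publish the actual training recipe.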

Good For

  • Creative Text-to-ASCII Art: Ideal for applications requiring the conversion of textual descriptions or prompts into ASCII art.
  • Specialized Content Generation: Suitable for developers and artists looking to integrate unique, text-based visual elements into their projects or tools.
  • Exploration of Fine-tuning: Provides an example of how base models can be effectively fine-tuned for niche, creative applications using frameworks like TRL.
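A minimal inference sketch using the Hugging Face transformers library. The prompt wording and generation settings below are assumptions, not documented usage; only the repository ID comes from the card:

```python
MODEL_ID = "PetarKal/Qwen3-4B-ascii-art-curated-mix-full-e3-lr3e-5-ga16-ctx4096"

def build_prompt(subject: str) -> str:
    # Hypothetical prompt format; the card does not document one.
    return f"Draw ASCII art of {subject}."

def generate_ascii_art(subject: str, max_new_tokens: int = 512) -> str:
    # Heavy dependencies are imported lazily so the prompt helper
    # stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(build_prompt(subject), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the echoed prompt tokens; keep only the generated continuation.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Loading in BF16 matches the quantization listed in the card's metadata; a 4B model at BF16 needs roughly 8 GB of accelerator memory for weights alone.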