PetarKal/Qwen3-4B-Base-ascii-art-v7-phase2-generation is a 4-billion-parameter Qwen3-based language model fine-tuned by PetarKal for generating ASCII art. It builds on a previous version focused on understanding ASCII art and now specializes in creating it. The model targets tasks that require generating text-based visual representations, and its 32,768-token context length leaves room for large, complex patterns.
Model Overview
PetarKal/Qwen3-4B-Base-ascii-art-v7-phase2-generation is a 4-billion-parameter language model fine-tuned from PetarKal/Qwen3-4B-Base-ascii-art-v6-phase1-understanding. Where its predecessor focused on understanding ASCII art patterns, this iteration focuses on generating them. The model was trained with the TRL (Transformer Reinforcement Learning) library, reflecting a specialized fine-tuning pipeline aimed at its generative capabilities.
Key Capabilities
- ASCII Art Generation: Specialized in creating text-based visual art.
- Fine-tuned Qwen3-4B Base: Leverages the foundational strengths of the Qwen3 architecture with 4 billion parameters.
- TRL Training: Fine-tuned with the TRL (Transformer Reinforcement Learning) library's supervised fine-tuning tooling for its niche task.
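As a minimal sketch of how the model might be used for generation with the standard Transformers API (the card does not document a prompt format, so the instruction-style prompt below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PetarKal/Qwen3-4B-Base-ascii-art-v7-phase2-generation"


def build_prompt(subject: str) -> str:
    """Wrap a subject in a plain instruction. The actual prompt format used
    during fine-tuning is not documented on the card; this is an assumption."""
    return f"Draw an ASCII art picture of a {subject}:\n"


def generate_ascii_art(subject: str, max_new_tokens: int = 512) -> str:
    """Load the model and sample an ASCII-art completion for the prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(subject), return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate_ascii_art("cat"))
```

Sampling parameters such as temperature are illustrative; as a base-model fine-tune, the model likely works best with a completion-style prompt rather than a chat template.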
Training Details
The model underwent a Supervised Fine-Tuning (SFT) process. Key framework versions used during training include TRL 0.29.1, Transformers 5.5.0, PyTorch 2.10.0, Datasets 4.8.4, and Tokenizers 0.22.2. Training progress and metrics can be visualized via Weights & Biases.
Good For
- Creative Text Generation: Ideal for applications requiring the programmatic creation of ASCII art.
- Specialized Content Creation: Useful for developers and artists exploring text-based visual mediums.
- Research in Generative Models: Provides a specific example of fine-tuning for a unique generative task.