PetarKal/Qwen3-4B-Base-ascii-art-v5dd-e3-lr5e-5-ga16-ctx4096
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Ctx length: 32k · Published: Mar 26, 2026 · Architecture: Transformer

PetarKal/Qwen3-4B-Base-ascii-art-v5dd-e3-lr5e-5-ga16-ctx4096 is a 4-billion-parameter language model fine-tuned by PetarKal from Qwen3-4B-Base. It has a context length of 32,768 tokens and was trained with supervised fine-tuning (SFT) using the TRL framework. As the model name suggests, the fine-tune is optimized for generating ASCII art.
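A fine-tuned causal LM like this is typically used through the Hugging Face Transformers API. The sketch below is a hypothetical usage example, assuming the checkpoint is published on the Hugging Face Hub under the repo id above as a standard causal-LM checkpoint; the prompt wording is illustrative and not documented by the author.

```python
def generate_ascii_art(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate ASCII art from a text prompt using the fine-tuned checkpoint.

    A minimal sketch assuming a standard causal-LM checkpoint on the
    Hugging Face Hub; not an official usage example from the author.
    """
    # Imported lazily so the sketch can be defined without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "PetarKal/Qwen3-4B-Base-ascii-art-v5dd-e3-lr5e-5-ga16-ctx4096"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the quantization listed in the card header.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example call (downloads the BF16 weights on first use):
# print(generate_ascii_art("Draw a cat in ASCII art:"))
```

Since this is a base-model fine-tune rather than an instruct model, plain-text prompting as above is the likely interface, but the exact prompt format used during training is not stated on the card.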
