PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6
Task: Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Apr 7, 2026 · Architecture: Transformer

PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6 is a 4-billion-parameter language model fine-tuned from PetarKal/Qwen3-4B-Base-ascii-art-v6-phase1-understanding. It was trained with supervised fine-tuning (SFT) using the TRL framework, building on the phase-1 model's capabilities, and is designed for text generation with a 32,768-token context length.
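A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the ID above and loadable with the standard `transformers` causal-LM API; the `generate` helper, prompt text, and `fits_context` budget check are illustrative, not part of the model card:

```python
MODEL_ID = "PetarKal/Qwen3-4B-Base-ascii-art-v6-phase2c-generation-lr3e6"
MAX_CONTEXT = 32768  # 32k-token context length stated on the model card


def fits_context(n_prompt_tokens: int, max_new_tokens: int,
                 limit: int = MAX_CONTEXT) -> bool:
    """Check that the prompt plus the generation budget stays within the window."""
    return n_prompt_tokens + max_new_tokens <= limit


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the helper above can be used without transformers/torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed in the card's metadata.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    assert fits_context(inputs["input_ids"].shape[1], max_new_tokens)

    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Draw a simple ASCII-art cat:"))
```

The lazy import keeps the module importable on machines without the heavy dependencies; actual inference requires downloading the 4B checkpoint.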
