Quaxicron/test2
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Published: Feb 23, 2026 · Architecture: Transformer · Warm

Quaxicron/test2 is a 0.5-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-0.5B-Instruct using Supervised Fine-Tuning (SFT) with the TRL library. It is designed for general text generation tasks and retains the Qwen2.5 architecture and 32,768-token context length of its base model.
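Since the model is an instruction-tuned Qwen2.5 derivative, it can be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example: it assumes the checkpoint is downloadable from the Hub under the id `Quaxicron/test2` and that it ships the usual Qwen2.5 chat template; the `generate` helper name is illustrative, not part of the model card.

```python
# Minimal usage sketch. Assumptions: the checkpoint "Quaxicron/test2" is
# available on the Hugging Face Hub and includes a Qwen2.5-style chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate(prompt: str, model_id: str = "Quaxicron/test2",
             max_new_tokens: int = 128) -> str:
    """Run one chat-style generation with the instruction-tuned model."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    # Instruction-tuned Qwen2.5 models expect chat-template-formatted input.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated continuation, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize supervised fine-tuning in one sentence."))
```

The guard at the bottom keeps the (network-dependent) model download out of import time, so the helper can be reused from other scripts.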
