Baon2024/Qwen2.5-0.5B-Instruct-sft-77
Text Generation · Model Size: 0.5B · Quant: BF16 · Context Length: 32k · Published: Jan 7, 2026 · Architecture: Transformer

Baon2024/Qwen2.5-0.5B-Instruct-sft-77 is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. Developed by Baon2024, it was trained with Supervised Fine-Tuning (SFT) using the TRL library. It targets general text generation, offering a compact instruction-following model with a 32k-token context window.
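Since this is an instruction-tuned model, prompts are expected in a chat format. As a rough sketch of what that format looks like, the helper below renders a message list into a ChatML-style prompt (the `<|im_start|>`/`<|im_end|>` markers are an assumption based on Qwen2.5's published chat format; in practice, `tokenizer.apply_chat_template` from the transformers library handles this for you):

```python
# Sketch: rendering chat messages into a ChatML-style prompt string, as used by
# Qwen2.5 instruct models. Normally transformers' tokenizer.apply_chat_template
# does this; the template here is a hand-written approximation for illustration.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize supervised fine-tuning in one sentence."},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

The rendered prompt is then tokenized and passed to the model; the open `assistant` turn at the end is what cues the model to produce its reply.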
