Baon2024/Qwen2.5-0.5B-Instruct-sft-77 is a 0.5-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-0.5B-Instruct. Developed by Baon2024, it was trained with Supervised Fine-Tuning (SFT) using the TRL library. The model targets general text generation, offering a compact option for applications that need an instruction-following model with a 131,072-token context length.
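As a minimal sketch of how the model might be used, the snippet below loads it with the Hugging Face `transformers` library and runs one instruction-following generation round. The `generate` helper and the example prompt are illustrative, not part of the model card; the repository id is the one stated above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id from the model card above.
MODEL_ID = "Baon2024/Qwen2.5-0.5B-Instruct-sft-77"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Run a single-turn chat completion with the fine-tuned model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    messages = [{"role": "user", "content": prompt}]
    # Apply the chat template inherited from Qwen2.5-Instruct.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize what supervised fine-tuning is in one sentence."))
```

Loading the weights downloads them from the Hub on first use; for constrained hardware, `torch_dtype` and `device_map` arguments to `from_pretrained` can reduce memory use.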