AgnivaSaha/model_sft_lora
Text generation | Concurrency cost: 1 | Model size: 1.5B | Quantization: BF16 | Context length: 32k | Published: Mar 18, 2026 | Architecture: Transformer

AgnivaSaha/model_sft_lora is a 1.5-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-1.5B-Instruct using the TRL framework. It supports a 32,768-token context length and is intended for general text generation, offering a compact yet capable option for conversational AI and instruction following.
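A minimal usage sketch with the Hugging Face `transformers` library is shown below. This assumes the checkpoint exposes a standard chat template inherited from its Qwen2.5-Instruct base; the prompt text is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AgnivaSaha/model_sft_lora"

# Load the tokenizer and model; BF16 matches the published quantization.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Format a single-turn conversation with the model's chat template.
messages = [
    {"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128)
reply = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:],
    skip_special_tokens=True,
)
print(reply)
```

Because the model was fine-tuned as an instruction follower, routing prompts through `apply_chat_template` (rather than raw text) keeps the input format consistent with training.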
