AgnivaSaha/model_sft_dare
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 18, 2026 · Architecture: Transformer · Warm

AgnivaSaha/model_sft_dare is a 1.5-billion-parameter instruction-tuned model published by AgnivaSaha, with a 32,768-token context window. It targets general language understanding and generation, and its long context makes it suited to tasks that require reading or producing extended text, such as summarizing long documents or drafting long-form responses.
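
The card does not include a usage snippet, so the following is a minimal sketch of how a model like this would typically be loaded and queried. It assumes the repo hosts standard Hugging Face transformers weights and defines a chat template (both assumptions, not confirmed by the card); the prompt and generation settings are illustrative only.

```python
# Minimal sketch: loading AgnivaSaha/model_sft_dare for text generation.
# Assumes standard transformers-format weights and a chat template in the
# repo; neither is confirmed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AgnivaSaha/model_sft_dare"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Illustrative prompt; the 32k context window allows much longer inputs.
messages = [
    {"role": "user", "content": "Summarize the plot of Hamlet in three sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```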
