sampluralis/llama-sft-baseline
Task: Text Generation
Concurrency Cost: 1
Model Size: 1B
Quantization: BF16
Context Length: 32k
Published: Mar 9, 2026
Architecture: Transformer

sampluralis/llama-sft-baseline is a 1-billion-parameter language model fine-tuned from gshasiri/SmolLM3-Mid. Developed by sampluralis, it was trained with the TRL library using supervised fine-tuning (SFT). The model targets general text generation, particularly conversational responses, and supports a context length of 32,768 tokens.
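
Since the card lists BF16 weights and a chat-oriented SFT checkpoint, a typical way to try the model would be through the transformers library. The snippet below is a minimal sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the model ID above and that the tokenizer ships a chat template (common for TRL SFT runs); the prompt and `max_new_tokens` value are illustrative.

```python
# Minimal generation sketch, assuming Hub hosting and a standard
# transformers causal-LM interface for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sampluralis/llama-sft-baseline"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# The model is fine-tuned for conversational responses, so we format the
# prompt with the chat template (assumed present, as in typical TRL SFT runs).
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated response.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With a 1B model in BF16 this fits comfortably on a single consumer GPU; `device_map="auto"` falls back to CPU if none is available.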
