sampluralis/llama-sft-masked
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Mar 10, 2026 · Architecture: Transformer · Warm

The sampluralis/llama-sft-masked model is a 1-billion-parameter language model fine-tuned with TRL using Supervised Fine-Tuning (SFT). It is based on a Llama architecture (the exact base model is unspecified) and is designed for text generation, particularly conversational responses, with a context length of 32,768 tokens.
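Since the card describes a TRL-fine-tuned chat model, it can most likely be loaded through the standard Hugging Face `transformers` causal-LM API. The sketch below is an assumption, not verified against this model: it presumes the repository ships a tokenizer with a chat template and standard `AutoModelForCausalLM` weights.

```python
# Hedged sketch: generating a conversational response from
# sampluralis/llama-sft-masked via Hugging Face transformers.
# Assumes the repo follows the standard causal-LM layout with a chat template.

MODEL_ID = "sampluralis/llama-sft-masked"
MAX_CONTEXT = 32768  # context length stated on the model card


def main() -> None:
    # Imports kept inside main() so the constants above can be used
    # without transformers/torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 quantization on the card
        device_map="auto",
    )

    messages = [{"role": "user", "content": "Summarize what SFT is in one sentence."}]
    # Render the chat into model-ready input ids via the tokenizer's chat template.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    assert input_ids.shape[-1] <= MAX_CONTEXT, "prompt exceeds the model's context window"

    output = model.generate(input_ids, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The BF16 dtype mirrors the quantization listed above; swap in `torch.float16` or a quantized load if the deployment hardware lacks bfloat16 support.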
