sampluralis/llama-sft
Text Generation · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1 · Architecture: Transformer · Published: Mar 2, 2026

sampluralis/llama-sft is a fine-tuned language model developed by sampluralis, built on a Llama-family base (the exact base model is unspecified). It was trained with the TRL (Transformer Reinforcement Learning) library using Supervised Fine-Tuning (SFT). The model targets text-generation tasks such as conversational AI and question answering, with training aimed at producing coherent, contextually relevant responses.
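As a sketch of how a conversational request to an SFT chat model is typically assembled, the snippet below builds a prompt from role-tagged messages. The Llama-3-style header tokens are an assumption on my part, not confirmed by this card; in practice the model's tokenizer ships its own chat template, so you would call `tokenizer.apply_chat_template` (or a `transformers` text-generation pipeline) rather than format by hand.

```python
# Illustrative only: the special tokens below follow a Llama-3-style chat
# format and are an ASSUMPTION, not taken from this model card.
# Real usage would load the model's own template, e.g.:
#   pipe = pipeline("text-generation", model="sampluralis/llama-sft")

def build_prompt(messages):
    """Format a list of {role, content} dicts into one prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is supervised fine-tuning?"},
]
prompt = build_prompt(messages)
print(prompt)
```

Keeping the assistant header open at the end is the standard trick for chat-formatted generation: the model's completion becomes the assistant turn.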
