sampluralis/llama-sft-proj
Text generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: Mar 4, 2026 · Architecture: Transformer

The sampluralis/llama-sft-proj model is a fine-tuned language model published by sampluralis and based on an unspecified Llama-family base model. It was trained with the TRL (Transformer Reinforcement Learning) library using supervised fine-tuning (SFT). The model is intended for general text generation, particularly conversational and question-answering applications where instruction following matters.
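As a rough usage sketch, an instruction-tuned model like this is typically prompted with a list of role/content chat messages. The snippet below is a hypothetical example, not from the model card: it only builds the message structure, and the commented-out `pipeline` call assumes the model is hosted on the Hugging Face Hub under the id shown.

```python
# Hypothetical usage sketch. Assumes the model is published on the
# Hugging Face Hub as "sampluralis/llama-sft-proj" and accepts the
# standard chat-message format used by TRL-finetuned models.

def build_chat(system_prompt: str, user_prompt: str) -> list[dict]:
    """Build the role/content message list an instruction-tuned model expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_chat(
    "You are a helpful assistant.",
    "Summarize supervised fine-tuning in one sentence.",
)
print(messages)

# To actually run the model (downloads weights; requires `transformers`):
#
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="sampluralis/llama-sft-proj")
#   print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```

Within the model's 32k-token context window, longer multi-turn conversations can be passed the same way by appending alternating user/assistant messages to the list.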
