ewqr2130/TinyLamma-SFT
Text generation
- Model size: 1.1B
- Quantization: BF16
- Context length: 2k
- Concurrency cost: 1
- Published: Jan 14, 2024
- License: apache-2.0
- Architecture: Transformer
- Open weights

ewqr2130/TinyLamma-SFT is a 1.1 billion parameter language model built on the Llama architecture and fine-tuned with supervised instruction data. It targets efficient text generation, with weights stored in the Safetensors format for fast, safe loading. The model suits applications that need a compact yet capable instruction-following model, such as chat assistants, summarization, and other lightweight text-generation tasks.
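A minimal usage sketch with the Hugging Face `transformers` library, assuming the model exposes the standard causal-LM interface. The card does not document a prompt template, so the plain-instruction format in `build_prompt` is a guess; adjust it to whatever format the SFT data actually used.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ewqr2130/TinyLamma-SFT"

def build_prompt(instruction: str) -> str:
    """Hypothetical plain-instruction format; not confirmed by the card."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, max_new_tokens: int = 64) -> str:
    """Download the weights (Safetensors, BF16 per the card) and run generation."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

With a 1.1B model in BF16, the weights occupy roughly 2.2 GB, so this should run on a single consumer GPU or, more slowly, on CPU.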
