Lili85/llama2-7b-kde4-full
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Apr 3, 2026 · Architecture: Transformer

Lili85/llama2-7b-kde4-full is a 7 billion parameter Llama 2-based causal language model fine-tuned by Lili85. This model was trained using Supervised Fine-Tuning (SFT) with the TRL framework. It is designed for general text generation tasks, leveraging its Llama 2 foundation for broad applicability. The model has a context length of 4096 tokens.
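A minimal usage sketch for loading this model with the Hugging Face `transformers` text-generation pipeline. The model id comes from this card; the generation parameters (`max_new_tokens`, sampling) and the example prompt are illustrative assumptions, not part of the card.

```python
# Hypothetical sketch: running Lili85/llama2-7b-kde4-full for text generation
# via the Hugging Face `transformers` pipeline API.

MODEL_ID = "Lili85/llama2-7b-kde4-full"  # from this model card
CONTEXT_LENGTH = 4096                    # max tokens per request, per the card


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` with the fine-tuned model.

    The import is deferred so the module can be inspected without
    transformers installed; loading the 7B checkpoint requires
    sufficient GPU/CPU memory.
    """
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        device_map="auto",  # place weights on available devices
    )
    # do_sample=True gives varied outputs; set to False for greedy decoding.
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True)
    return out[0]["generated_text"]


if __name__ == "__main__":
    # Example prompt is illustrative only.
    print(generate("Write a short note about open-source desktop software."))
```

Because the model is a standard Llama 2 causal LM, it can also be loaded directly with `AutoModelForCausalLM.from_pretrained(MODEL_ID)` when finer control over tokenization and decoding is needed.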
