davzoku/cria-llama2-7b-v1.3
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Aug 14, 2023 · License: llama2 · Architecture: Transformer · Open Weights

The davzoku/cria-llama2-7b-v1.3 model is a 7-billion-parameter Llama 2-based language model fine-tuned by davzoku using QLoRA (4-bit precision) on the mlabonne/CodeLlama-2-20k dataset. It is designed as the backbone of an end-to-end chatbot system, with instruction tuning targeted at conversational AI applications. The model supports a 4,096-token context window and is optimized for deployment behind web frameworks such as Next.js.
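Since the model is instruction-tuned on a Llama 2 base, prompts are typically wrapped in the standard Llama 2 chat template before generation. A minimal sketch of such a prompt builder is shown below; the exact template Cria was trained with may differ, so treat `build_llama2_prompt` (a hypothetical helper, not part of the model's tooling) as an illustration of the conventional `[INST]`/`<<SYS>>` format rather than the model's confirmed format.

```python
# Hedged sketch: construct a prompt in the standard Llama 2 chat format,
# which instruction-tuned Llama 2 derivatives commonly expect.
# `build_llama2_prompt` is an illustrative helper, not an official API.
def build_llama2_prompt(
    instruction: str,
    system: str = "You are a helpful assistant.",
) -> str:
    # <s> marks the start of the sequence; the system prompt sits inside
    # <<SYS>> tags, and the user instruction is closed by [/INST].
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"


prompt = build_llama2_prompt("What is a llama?")
print(prompt)
```

The resulting string would then be tokenized and passed to the model (for example via the Hugging Face `transformers` `generate` API), keeping the total token count within the 4,096-token context window.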
