aloobun/llama2-7b-guanaco
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights

The aloobun/llama2-7b-guanaco model is a 7-billion-parameter Llama-2-chat-hf variant fine-tuned with QLoRA (4-bit precision) on the mlabonne/guanaco-llama2-1k dataset, a subset of OpenAssistant/oasst1. It is intended primarily for educational purposes, as a demonstration of QLoRA fine-tuning.
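Since the base model is a Llama-2 chat variant, prompts should follow the Llama-2 `[INST] ... [/INST]` chat format. The sketch below shows one plausible way to query the model with the Hugging Face `transformers` library; the `build_prompt` helper and the generation parameters are illustrative assumptions, not part of the model card.

```python
# Illustrative sketch: querying aloobun/llama2-7b-guanaco via transformers.
# Assumes transformers is installed and enough memory for a 7B model.
from transformers import AutoModelForCausalLM, AutoTokenizer


def build_prompt(instruction: str) -> str:
    # Hypothetical helper: wrap a user instruction in the Llama-2 chat
    # format used by the guanaco-llama2-1k training data.
    return f"<s>[INST] {instruction} [/INST]"


def generate(instruction: str,
             model_id: str = "aloobun/llama2-7b-guanaco") -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(instruction),
                       return_tensors="pt").to(model.device)
    # max_new_tokens is an arbitrary choice for demonstration.
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

With the model's 4k context length, keep the formatted prompt plus generated tokens under that budget.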
