lu-vae/llama2-13B-sharegpt4-orca-openplatypus-8w
Text Generation · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Concurrency Cost: 1 · Published: Sep 14, 2023 · License: llama2 · Architecture: Transformer · Open Weights

The lu-vae/llama2-13B-sharegpt4-orca-openplatypus-8w model is a 13-billion-parameter language model based on the Llama 2 architecture, with weights quantized to FP8 (8-bit) precision. It is fine-tuned on a mix of datasets including ShareGPT4, Orca, and OpenPlatypus, which strengthens its conversational and instruction-following abilities. With a context length of 4096 tokens, the model targets efficient deployment and inference while maintaining strong performance across a range of natural language processing tasks.
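The practical effect of 8-bit quantization is roughly halving the memory needed to store the weights compared with FP16. A quick back-of-the-envelope sketch (the helper function below is illustrative, not part of any library):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes).

    Ignores activation memory, KV cache, and runtime overhead,
    which add to the real footprint during inference.
    """
    return n_params * bits_per_param / 8 / 1e9

# 13B parameters stored in FP8 (8 bits) vs FP16 (16 bits)
fp8_gb = weight_memory_gb(13e9, 8)
fp16_gb = weight_memory_gb(13e9, 16)
print(f"FP8: {fp8_gb:.1f} GB, FP16: {fp16_gb:.1f} GB")
```

At FP8 the weights alone come to roughly 13 GB versus about 26 GB at FP16, which is what makes single-GPU deployment of a 13B model practical.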
