georgesung/llama2_7b_openorca_35k
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: other · Architecture: Transformer

georgesung/llama2_7b_openorca_35k is a 7-billion-parameter Llama 2 model fine-tuned by georgesung. It was trained with QLoRA on a 35k-example subset of the OpenOrca dataset, tuning it for instruction following and helpful assistant behavior. The model is intended for general-purpose conversational AI applications, drawing on the diverse instruction data it was fine-tuned on.
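A minimal sketch of running the model locally with Hugging Face transformers. The `### System:` / `### User:` / `### Response:` delimiters below follow the common OpenOrca-style template, but the exact prompt format is an assumption here; verify it against the model card before relying on it.

```python
# Hedged sketch: single-turn generation with Hugging Face transformers.
# The prompt delimiters are an ASSUMED OpenOrca-style template, not
# confirmed by this page; check the model card for the exact format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "georgesung/llama2_7b_openorca_35k"


def build_prompt(instruction: str,
                 system: str = "You are a helpful assistant.") -> str:
    """Assemble a single-turn prompt in the assumed instruction format."""
    return f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (7B weights: needs a GPU or ample RAM)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens so only the model's reply is decoded.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

For production use, the same prompt-building helper can feed a quantized deployment (the page above lists FP8) instead of loading full-precision weights.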
