HWERI/llama2-exams-orca-sharegpt
Task: Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Oct 18, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights

HWERI/llama2-exams-orca-sharegpt is a 7-billion-parameter causal language model based on Llama 2, fine-tuned on a combination of ShareGPT conversations, an exams dataset, and a subset of the Orca dataset. With a 4096-token context length, the model targets conversational AI and instruction-following tasks, drawing on diverse, high-quality instruction data. Its training methodology focuses on improving both general conversational ability and knowledge-based question answering.
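A minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes the model repository is publicly downloadable under this ID and that the fine-tune retains the standard Llama 2 chat prompt template (`[INST] ... [/INST]` with an optional `<<SYS>>` block); both are assumptions, not details confirmed by this card.

```python
MODEL_ID = "HWERI/llama2-exams-orca-sharegpt"

def build_prompt(system: str, user: str) -> str:
    # Llama 2 chat-style prompt (assumption: the fine-tune keeps this template).
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def main() -> None:
    # transformers is imported lazily so the prompt helper above stays
    # usable without the library installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = build_prompt(
        "You are a helpful assistant.",
        "Explain the water cycle in two sentences.",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    ))

if __name__ == "__main__":
    main()
```

Keeping the prompt inside the 4096-token context window is the caller's responsibility; long ShareGPT-style multi-turn histories should be truncated before generation.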
