Vivian12300/llama-2-7b-chat-hf-mmlu
Text generation
Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k
Published: Sep 12, 2024 · License: llama2 · Architecture: Transformer · Open weights · Cold

Vivian12300/llama-2-7b-chat-hf-mmlu is a 7-billion-parameter fine-tune of Meta's Llama-2-7b-chat-hf by Vivian12300, adapted to improve performance on the MMLU benchmark. It is intended for tasks requiring strong general knowledge and reasoning, and supports a 4096-token context length.
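Since this is a Llama-2-chat derivative, prompts should follow the standard Llama-2 chat template. A minimal sketch of building such a prompt is shown below; the `build_llama2_prompt` helper is illustrative, not part of this model's release, and the commented `transformers` load is an assumption based on the open-weights listing.

```python
def build_llama2_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap a system and user message in the standard Llama-2 chat template."""
    return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Which planet is known as the Red Planet?",
)
print(prompt)

# With the transformers library installed, the weights could then be
# loaded in the usual way (sketch, requires a GPU and a download):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Vivian12300/llama-2-7b-chat-hf-mmlu")
#   model = AutoModelForCausalLM.from_pretrained("Vivian12300/llama-2-7b-chat-hf-mmlu")
```

Keeping the `[INST]`/`<<SYS>>` markers exactly as in the base model's training data generally matters for chat-tuned Llama-2 checkpoints, since the fine-tune inherits that format.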
