HWERI/Llama2-7b-openorca-mc-v2

Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 4k | Published: Aug 23, 2023 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

HWERI/Llama2-7b-openorca-mc-v2 is a 7 billion parameter Llama2-based language model, fine-tuned on a 10k multiple-choice-focused subset of OpenOrca augmented with 6k samples from the ShareGPT4 dataset. The model is optimized for tasks requiring strong multiple-choice question answering, achieving notable scores on benchmarks such as ARC and HellaSwag, and is designed for applications where accurately selecting from given options is critical.


HWERI/Llama2-7b-openorca-mc-v2: Multiple Choice Optimized Llama2 Variant

HWERI/Llama2-7b-openorca-mc-v2 is a 7 billion parameter language model built upon the Llama2 architecture. Its key differentiator lies in its specialized fine-tuning process, which involved a 10,000-sample subset of the OpenOrca dataset, specifically curated for multiple-choice questions, combined with 6,000 samples from the ShareGPT4 dataset.
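The snippet below is a minimal sketch of loading the model for greedy multiple-choice answering with Hugging Face Transformers. It assumes the weights are hosted on the Hub under the HWERI/Llama2-7b-openorca-mc-v2 repo ID and that a GPU with enough memory for a 7B model in fp16 is available; the lettered prompt layout is illustrative, not a documented template for this model.

```python
# Minimal sketch: load the model and answer a multiple-choice question
# greedily. Assumes the weights are on the Hugging Face Hub under this
# repo ID; the prompt layout below is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HWERI/Llama2-7b-openorca-mc-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps a 7B model within ~14 GB of VRAM
    device_map="auto",
)

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```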

Key Capabilities & Performance

This model demonstrates strong performance in multiple-choice question answering scenarios, as reflected in its Open LLM Leaderboard evaluation results:

  • ARC (25-shot): 55.55
  • HellaSwag (10-shot): 81.26
  • MMLU (5-shot): 48.3
  • Winogrande (5-shot): 72.85

While the model excels at multiple-choice and common-sense reasoning, its performance on mathematical reasoning (GSM8K) and reading comprehension (DROP) is lower, reflecting an optimization focused on specific task types.
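These scores come from few-shot log-likelihood evaluations. As a rough illustration of how harness-style benchmarks score multiple-choice items, the sketch below ranks each candidate answer by the total log-probability its tokens receive from the model; it reuses the model and tokenizer from the snippet above, and the question is invented for illustration.

```python
# Hedged sketch of harness-style multiple-choice scoring: pick the option
# whose tokens have the highest total log-probability under the model.
# Reuses `model` and `tokenizer` from the loading snippet above.
import torch
import torch.nn.functional as F

def score_option(context: str, option: str) -> float:
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids.to(model.device)
    full_ids = tokenizer(context + option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)
    # The token at position i is predicted by the logits at position i - 1.
    log_probs = F.log_softmax(logits[:, :-1, :], dim=-1)
    option_ids = full_ids[:, ctx_ids.shape[1]:]          # option tokens only
    option_lp = log_probs[:, ctx_ids.shape[1] - 1:, :]   # their predictions
    token_lp = option_lp.gather(-1, option_ids.unsqueeze(-1)).squeeze(-1)
    # Simplification: assumes tokenization splits cleanly at the boundary
    # between context and option (usually true when the option starts with
    # a space under the Llama tokenizer).
    return token_lp.sum().item()

question = "Question: What gas do plants absorb during photosynthesis?\nAnswer:"
options = [" Oxygen", " Carbon dioxide", " Nitrogen", " Hydrogen"]
print(max(options, key=lambda o: score_option(question, o)))
```

Real evaluation harnesses add details this sketch omits, such as few-shot exemplars in the context and length-normalized variants of the score.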

Ideal Use Cases

This model is particularly well-suited for applications requiring robust multiple-choice question answering, such as:

  • Educational assessment tools
  • Quiz generation and solving (see the prompt-formatting sketch after this list)
  • Fact-checking systems where answers are presented as options
  • Any scenario where the primary task involves selecting the correct answer from a predefined set of choices
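For quiz-style applications, each item first needs to be framed as a prompt. The helper below is a hypothetical convenience for doing so; the lettered A/B/C/D layout is an assumed convention, not a prompt template documented for this model.

```python
# Hypothetical helper for framing quiz items as prompts; the lettered
# layout is an assumed convention, not a documented template.
def format_mcq(question: str, choices: list[str]) -> str:
    letters = "ABCDEFGH"
    lines = [f"Question: {question}"]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

print(format_mcq(
    "Which organ pumps blood through the body?",
    ["Liver", "Heart", "Lungs", "Kidney"],
))
```

The resulting string can be passed to generate() as in the first snippet, or the individual choices can be scored one by one as in the log-likelihood sketch above.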