Weyaxi/MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · Architecture: Transformer

Weyaxi/MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch is a 13-billion-parameter language model, fine-tuned with QLoRA and built on Platypus2. It achieves an average score of 47.95 on the Open LLM Leaderboard, with notable results on HellaSwag (81.24) and Winogrande (73.88). It is primarily suited to general language understanding and generation tasks, particularly those requiring common sense reasoning and factual recall.


Model Overview

Weyaxi/MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch is a 13-billion-parameter language model, fine-tuned with QLoRA for general-purpose language tasks. The model is built on Platypus2 and was trained for 0.80 epochs.

Performance Highlights

The model's performance has been evaluated on the Open LLM Leaderboard, achieving an average score of 47.95. Key benchmark results include:

  • ARC (25-shot): 57.34
  • HellaSwag (10-shot): 81.24
  • MMLU (5-shot): 55.64
  • TruthfulQA (0-shot): 55.98
  • Winogrande (5-shot): 73.88

These scores indicate solid capability in common sense reasoning, factual recall, and general language understanding. The model is particularly strong on HellaSwag and Winogrande, which assess commonsense sentence completion and pronoun disambiguation, respectively.

Use Cases

This model is well-suited for applications requiring:

  • General text generation: Creating coherent and contextually relevant text.
  • Question answering: Responding to queries based on its training data.
  • Reasoning tasks: Handling tasks that involve common sense and logical inference.

Users should note the GSM8K (5-shot) score of 0.0, indicating that this model is not optimized for complex mathematical reasoning or problem-solving.
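For the use cases above, a minimal inference sketch with the Hugging Face Transformers library might look like the following. This is an illustrative example, not official usage from the model card: the generation settings (`float16`, `max_new_tokens`) are assumptions, and loading the 13B weights in fp16 requires roughly 26 GB of memory.

```python
# Hypothetical inference sketch for this model via Hugging Face Transformers.
MODEL_ID = "Weyaxi/MythicalDestroyerV2-Platypus2-13B-QLora-0.80-epoch"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model and return a text completion for `prompt`.

    Imports are deferred so the sketch can be read (and the constants
    reused) without torch/transformers installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # assumed precision; adjust to your hardware
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Usage (downloads ~13B weights on first call):
#   text = generate("Briefly explain what common sense reasoning is.")
```

Given the GSM8K score of 0.0, prompts requiring multi-step arithmetic should be routed elsewhere; this sketch is intended for the general generation and reasoning tasks listed above.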