Weyaxi/Limarp-Platypus2-13B-QLoRA-0.80-epoch

Text generation · Model size: 13B · Quantization: FP8 · Context length: 4k · License: llama2 · Architecture: Transformer · Open weights

Weyaxi/Limarp-Platypus2-13B-QLoRA-0.80-epoch is a 13 billion parameter language model created by merging Oniichat/limarp-13b-merged and Weyaxi/Platypus2-13B-QLoRA-0.80-epoch. It shows balanced performance across benchmarks such as MMLU and HellaSwag, supports a context length of 4096 tokens, and is suitable for general-purpose language understanding and generation tasks.


Model Overview

Weyaxi/Limarp-Platypus2-13B-QLoRA-0.80-epoch is a 13 billion parameter language model resulting from the merge of two distinct models: Oniichat/limarp-13b-merged and Weyaxi/Platypus2-13B-QLoRA-0.80-epoch. This merging strategy aims to combine the strengths of its constituent models, providing a versatile tool for various natural language processing tasks.

Performance Benchmarks

Evaluated on the Open LLM Leaderboard, this model achieves an average score of 47.74, reflecting balanced capability across tasks. Notable individual scores include:

  • ARC (25-shot): 60.49
  • HellaSwag (10-shot): 82.76
  • MMLU (5-shot): 56.52
  • TruthfulQA (0-shot): 44.14
  • Winogrande (5-shot): 76.8

These results suggest proficiency in common sense reasoning, reading comprehension, and general knowledge tasks. The model operates with a context length of 4096 tokens, making it suitable for processing moderately long inputs.

Use Cases

Given its balanced benchmark performance, Limarp-Platypus2-13B-QLoRA-0.80-epoch is well-suited for:

  • General text generation and completion
  • Question answering
  • Summarization
  • Reasoning tasks where a broad understanding of language is required
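Because the model's context window is 4096 tokens, client applications typically need to trim conversation history before building a prompt. The sketch below is a minimal, hypothetical example of such trimming: the names (`fit_history`, `estimate_tokens`) and the reserved-output budget are illustrative assumptions, and the character-based token estimate is a crude heuristic, not the model's actual tokenizer.

```python
MAX_CONTEXT = 4096          # model's context window, per the model card
RESERVED_FOR_OUTPUT = 512   # hypothetical budget kept free for the reply

def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); a real client would
    # count tokens with the model's own tokenizer instead.
    return max(1, len(text) // 4)

def fit_history(messages: list[str],
                max_context: int = MAX_CONTEXT,
                reserved: int = RESERVED_FOR_OUTPUT) -> list[str]:
    """Keep the most recent messages whose combined token estimate
    fits within the context window minus the output budget."""
    budget = max_context - reserved
    kept: list[str] = []
    used = 0
    # Walk from newest to oldest, dropping history that no longer fits.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

In practice the oldest turns are discarded first, so recent context survives; swapping `estimate_tokens` for a real tokenizer call keeps the same structure.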