Weyaxi/2x-LoRA-Assemble-Platypus2-13B

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer

Weyaxi/2x-LoRA-Assemble-Platypus2-13B is a 13 billion parameter language model built on Platypus2-13B and fine-tuned for general language understanding and generation. It demonstrates competitive performance across various benchmarks, including ARC, HellaSwag, and MMLU. This model is suitable for tasks requiring common-sense reasoning and factual recall, offering a balanced performance profile for its size.


Model Overview

Weyaxi/2x-LoRA-Assemble-Platypus2-13B is a 13 billion parameter language model built upon Platypus2-13B. It has been evaluated on the Open LLM Leaderboard, showcasing its capabilities across a range of academic benchmarks.

Key Performance Metrics

The model achieves an average score of 51.13 on the Open LLM Leaderboard. Notable individual benchmark results include:

  • ARC (25-shot): 60.58
  • HellaSwag (10-shot): 82.56
  • MMLU (5-shot): 58.25
  • TruthfulQA (0-shot): 54.77
  • Winogrande (5-shot): 74.90
  • GSM8K (5-shot): 0.91
  • DROP (3-shot): 25.96
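As a quick sanity check, the reported leaderboard average can be reproduced as the plain arithmetic mean of the seven benchmark scores above (a sketch; the leaderboard's exact aggregation method is assumed here to be an unweighted mean):

```python
# Unweighted mean of the seven reported benchmark scores.
scores = {
    "ARC (25-shot)": 60.58,
    "HellaSwag (10-shot)": 82.56,
    "MMLU (5-shot)": 58.25,
    "TruthfulQA (0-shot)": 54.77,
    "Winogrande (5-shot)": 74.90,
    "GSM8K (5-shot)": 0.91,
    "DROP (3-shot)": 25.96,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 51.13, matching the reported average
```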

These scores indicate strong performance in common-sense reasoning, reading comprehension, and general knowledge tasks. Note, however, the very low GSM8K score, which suggests the model is weak at multi-step arithmetic; for other NLP applications it remains a versatile option at this size.

Potential Use Cases

  • General text generation: Creating coherent and contextually relevant text.
  • Question answering: Leveraging its strong performance on benchmarks like ARC and MMLU.
  • Reasoning tasks: Applicable for scenarios requiring logical inference and problem-solving.
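The use cases above can be tried with the Hugging Face `transformers` library. This is a minimal sketch, assuming the checkpoint is available on the Hub under this repo id and that enough GPU memory is available for a 13B model; the `generate` helper and its prompt are illustrative, not part of the model card:

```python
MODEL_ID = "Weyaxi/2x-LoRA-Assemble-Platypus2-13B"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and return a completion for `prompt`.

    Imports are done lazily so this module stays importable even when
    `transformers` is not installed. Loading a 13B checkpoint requires
    substantial GPU memory; `device_map="auto"` spreads it across
    available devices.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain why the sky is blue in one sentence."))
```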