42MARU/GenAI-llama-2-13b

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Architecture: Transformer

42MARU/GenAI-llama-2-13b is a 13 billion parameter language model developed by 42MARU, built upon the LLaMA-2 backbone. It is fine-tuned on Orca-style and Platypus datasets, making it suitable for general conversational AI and question-answering tasks. This model demonstrates solid performance across various benchmarks, including ARC, HellaSwag, and MMLU, indicating its utility for diverse natural language understanding applications.


Model Overview

42MARU/GenAI-llama-2-13b is a 13 billion parameter language model developed by 42MARU, leveraging the LLaMA-2 architecture. It has been fine-tuned using Orca-style and Platypus datasets, enhancing its capabilities for conversational AI and question-answering.

Key Capabilities & Performance

This model demonstrates competitive performance across several benchmarks on the Open LLM Leaderboard:

  • Avg. Score: 56.03
  • ARC (25-shot): 63.14
  • HellaSwag (10-shot): 83.64
  • MMLU (5-shot): 59.91
  • TruthfulQA (0-shot): 56.21
  • Winogrande (5-shot): 76.72
  • GSM8K (5-shot): 9.40
  • DROP (3-shot): 43.23
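
As a sanity check, the reported average can be reproduced from the per-task scores, assuming the leaderboard's "Avg." is the unweighted mean of all seven listed tasks (an assumption about the aggregation scheme, not stated in this card):

```python
# Per-task scores as listed above; treating "Avg." as the unweighted
# mean over all seven tasks is an assumption about the leaderboard.
scores = {
    "ARC": 63.14,
    "HellaSwag": 83.64,
    "MMLU": 59.91,
    "TruthfulQA": 56.21,
    "Winogrande": 76.72,
    "GSM8K": 9.4,
    "DROP": 43.23,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # ~56.04, matching the reported 56.03 up to rounding
```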

These scores point to relative strength in commonsense reasoning (HellaSwag, Winogrande), reading comprehension (DROP), and general knowledge (ARC, MMLU), while the low GSM8K result suggests limited multi-step arithmetic reasoning. The model expects a prompt template for user and assistant turns; consult the model repository for the exact format.
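
This card does not publish the exact template, but Orca-style fine-tunes commonly use a system/user/assistant layout. A minimal sketch, assuming a hypothetical `### System:` / `### User:` / `### Assistant:` template — verify against the upstream repository before relying on it:

```python
# Hypothetical Orca-style prompt builder; the exact template used by
# 42MARU/GenAI-llama-2-13b is an assumption -- check the model repo.
def build_prompt(user_message: str,
                 system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a single-turn prompt in an assumed Orca-style layout."""
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant:\n"
    )

# Loading and generating with Hugging Face transformers (the 13B weights
# need roughly 26 GB in fp16; run only where that is feasible):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("42MARU/GenAI-llama-2-13b")
#   model = AutoModelForCausalLM.from_pretrained(
#       "42MARU/GenAI-llama-2-13b", device_map="auto", torch_dtype="auto")
#   inputs = tok(build_prompt("What is ARC?"),
#                return_tensors="pt").to(model.device)
#   out = model.generate(**inputs, max_new_tokens=128)
#   print(tok.decode(out[0], skip_special_tokens=True))
```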

Use Cases

Given its fine-tuning and benchmark performance, 42MARU/GenAI-llama-2-13b is well-suited for:

  • General-purpose conversational agents: Engaging in natural dialogue.
  • Question Answering systems: Providing informative responses based on context.
  • Text generation tasks: Creating coherent and relevant text.
  • Educational applications: Assisting with understanding and knowledge retrieval.