indischepartij/MiaAffogato-Indo-Mistral-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Feb 3, 2024 · License: apache-2.0 · Architecture: Transformer

The indischepartij/MiaAffogato-Indo-Mistral-7b is a 7-billion-parameter language model based on the Mistral architecture. It achieves an average score of 70.83 on the Open LLM Leaderboard, with strong performance on reasoning and commonsense tasks. The model is suitable for general language understanding and generation tasks, particularly where a balance of performance and efficiency is desired.


Model Overview

The indischepartij/MiaAffogato-Indo-Mistral-7b is a 7-billion-parameter language model built on the Mistral architecture. While specific development and training details are marked as "More Information Needed" in its model card, its performance metrics are publicly available on the Hugging Face Open LLM Leaderboard.
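Since the weights are hosted on the Hugging Face Hub, the model can be loaded with the standard transformers API. The snippet below is a minimal inference sketch rather than an official example from the model card; the bfloat16 dtype, device placement, and prompt are assumptions, and the FP8-quantized serving variant listed above may require a different loading path.

```python
# Minimal inference sketch using the Hugging Face transformers library.
# Assumptions: `transformers` and `torch` are installed and a GPU is available;
# adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "indischepartij/MiaAffogato-Indo-Mistral-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit in GPU memory
    device_map="auto",
)

prompt = "Explain in one paragraph why the sky appears blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```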

Key Performance Metrics

This model demonstrates competitive performance across various benchmarks, achieving an overall average score of 70.83 on the Open LLM Leaderboard. Notable scores include:

  • AI2 Reasoning Challenge (25-shot): 66.38
  • HellaSwag (10-shot): 85.43
  • MMLU (5-shot): 64.11
  • TruthfulQA (0-shot): 58.18
  • Winogrande (5-shot): 83.19
  • GSM8k (5-shot): 67.70

These results indicate a balanced capability in reasoning, common sense, and general knowledge tasks.
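Leaderboard scores of this kind are typically produced with EleutherAI's lm-evaluation-harness. The sketch below shows roughly how the ARC and HellaSwag numbers could be reproduced; it assumes lm-eval v0.4 or later is installed, and the task names, few-shot counts, and metric keys are assumptions that may not match the exact leaderboard configuration.

```python
# Rough reproduction sketch using EleutherAI's lm-evaluation-harness (lm-eval >= 0.4).
# Assumptions: the harness is installed (`pip install lm-eval`) and the leaderboard
# used the same task names and few-shot settings listed above.
import lm_eval

MODEL_ARGS = "pretrained=indischepartij/MiaAffogato-Indo-Mistral-7b,dtype=bfloat16"

# Task name -> few-shot count, mirroring the benchmark list above.
TASKS = {"arc_challenge": 25, "hellaswag": 10}

for task, shots in TASKS.items():
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size=8,
    )
    print(task, results["results"][task])
```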

Potential Use Cases

Given its 7 billion parameters and benchmark performance, this model is suitable for a range of applications requiring robust language understanding and generation (a brief usage sketch follows the list), including:

  • General-purpose text generation
  • Question answering
  • Summarization
  • Reasoning tasks where a smaller, efficient model is preferred over larger alternatives
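As a rough illustration of question-answering use, the sketch below runs the model through the transformers text-generation pipeline. The prompt format is an assumption, since the model card does not document an instruction or chat template.

```python
# Question-answering sketch via the transformers text-generation pipeline.
# Assumption: a plain "Question:/Answer:" prompt works reasonably well;
# the model card does not specify a prompt or chat template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="indischepartij/MiaAffogato-Indo-Mistral-7b",
    device_map="auto",
)

question = "What is the capital of Indonesia?"
result = generator(
    f"Question: {question}\nAnswer:",
    max_new_tokens=64,
    do_sample=False,
    return_full_text=False,
)
print(result[0]["generated_text"])
```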