Weyaxi/Nebula-7B

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 4, 2023 · License: apache-2.0 · Architecture: Transformer

Nebula-7B is a 7 billion parameter language model developed by PulsarAI, fine-tuned from Mistral-7B-v0.1. It achieves an average score of 53.93 on the Open LLM Leaderboard, with notably strong results on HellaSwag and Winogrande. It is designed for general-purpose language understanding and generation, offering a balanced capability set for a variety of applications.


Nebula-7B Overview

Nebula-7B is a 7 billion parameter language model developed by PulsarAI, built upon the foundational mistralai/Mistral-7B-v0.1 architecture. The model has been fine-tuned for balanced performance across a range of benchmarks, making it suitable for diverse natural language processing tasks.

Key Capabilities & Performance

The model's performance is officially evaluated on the Open LLM Leaderboard, where it achieved an average score of 53.93. Specific benchmark results include:

  • ARC (25-shot): 59.3
  • HellaSwag (10-shot): 83.46
  • MMLU (5-shot): 57.0
  • TruthfulQA (0-shot): 45.56
  • Winogrande (5-shot): 76.4
  • GSM8K (5-shot): 14.86
  • DROP (3-shot): 40.96

These scores indicate strong common-sense reasoning (HellaSwag, Winogrande) and solid general knowledge (MMLU), while the lower GSM8K result suggests more limited multi-step arithmetic reasoning. The original LoRA adapter weights for Nebula-7B are also available from PulsarAI/Nebula-7B-Lora.
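The reported leaderboard average is simply the arithmetic mean of the seven benchmark scores above, which can be checked in a few lines (benchmark names and values taken from this card):

```python
# Open LLM Leaderboard scores for Nebula-7B, as listed on this card.
scores = {
    "ARC": 59.3,
    "HellaSwag": 83.46,
    "MMLU": 57.0,
    "TruthfulQA": 45.56,
    "Winogrande": 76.4,
    "GSM8K": 14.86,
    "DROP": 40.96,
}

# The leaderboard average is a plain (unweighted) mean over all benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 53.93
```

Because the mean is unweighted, the high HellaSwag and Winogrande scores offset the much lower GSM8K score in the overall figure.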

Good For

  • General-purpose text generation and understanding.
  • Applications requiring balanced performance across various NLP tasks.
  • Researchers and developers looking for a fine-tuned Mistral-7B variant.
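As a standard fine-tune of Mistral-7B, the model can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal, hypothetical usage example (the helper name and generation settings are illustrative, not from this card); running it requires `transformers`, `torch`, and roughly 14 GB of memory for the FP16 weights:

```python
def load_nebula(model_id: str = "Weyaxi/Nebula-7B"):
    """Load the Nebula-7B tokenizer and model.

    Assumes the `transformers` library is installed and that enough
    GPU/CPU memory is available for a 7B-parameter model (~14 GB in FP16).
    """
    # Import deferred so this module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places weights on available GPUs, spilling to CPU if needed.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for `prompt` using greedy decoding."""
    tokenizer, model = load_nebula()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Note that the 4k context length listed above bounds the combined prompt and generated tokens per request.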