Weyaxi/CollectiveCognition-v1.1-Nebula-7B

TEXT GENERATION | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 8k | Published: Oct 8, 2023 | License: cc-by-nc-4.0 | Architecture: Transformer | Open Weights | Cold

Weyaxi/CollectiveCognition-v1.1-Nebula-7B is a 7 billion parameter language model, a merge of teknium/CollectiveCognition-v1.1-Mistral-7B and PulsarAI/Nebula-7B, offering an 8192-token context length. This model demonstrates balanced performance across various benchmarks, including MMLU and HellaSwag, making it suitable for general-purpose language understanding and generation tasks. Its merged architecture aims to combine strengths from its constituent models for improved overall capability.


CollectiveCognition-v1.1-Nebula-7B Overview

CollectiveCognition-v1.1-Nebula-7B is a 7 billion parameter language model resulting from a merge of two distinct models: teknium/CollectiveCognition-v1.1-Mistral-7B and PulsarAI/Nebula-7B. This integration aims to leverage the strengths of both base models, providing a versatile tool for various natural language processing applications. The model supports an 8192-token context length, allowing it to process longer inputs and generate more coherent, extended outputs.
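As a rough illustration of working within the 8192-token window, the sketch below trims an over-long prompt before it is sent to the model. The whitespace tokenizer here is purely a stand-in assumption; a real deployment would count tokens with the model's own tokenizer, which will give different counts.

```python
# Sketch: keep a prompt within the model's 8192-token context window.
# NOTE: whitespace splitting is a stand-in for the real tokenizer;
# actual token counts for a Mistral-based model will differ.

CTX_LENGTH = 8192

def truncate_to_context(text: str, max_tokens: int = CTX_LENGTH) -> str:
    """Drop the oldest tokens so the prompt fits the context window."""
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    # Keep the most recent tokens, since trailing context usually matters most.
    return " ".join(tokens[-max_tokens:])

prompt = "word " * 10000          # deliberately over-long input
fitted = truncate_to_context(prompt)
print(len(fitted.split()))       # 8192
```

In practice you would also reserve headroom for the generated tokens, since prompt and completion share the same window.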

Performance Highlights

Evaluated on the Open LLM Leaderboard, CollectiveCognition-v1.1-Nebula-7B achieves an average score of 53.79. Key benchmark results include:

  • ARC (25-shot): 58.11
  • HellaSwag (10-shot): 82.39
  • MMLU (5-shot): 57.03
  • TruthfulQA (0-shot): 53.53
  • Winogrande (5-shot): 73.72
  • GSM8K (5-shot): 9.55
  • DROP (3-shot): 42.17

These scores indicate a balanced capability across reasoning, common sense, and factual recall tasks, making it a solid choice for general-purpose applications.
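The reported 53.79 average is simply the arithmetic mean of the seven benchmark scores listed above, which the snippet below reproduces:

```python
# Reproduce the Open LLM Leaderboard average from the per-benchmark scores.
scores = {
    "ARC (25-shot)": 58.11,
    "HellaSwag (10-shot)": 82.39,
    "MMLU (5-shot)": 57.03,
    "TruthfulQA (0-shot)": 53.53,
    "Winogrande (5-shot)": 73.72,
    "GSM8K (5-shot)": 9.55,
    "DROP (3-shot)": 42.17,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 53.79
```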

Good For

  • General text generation: Creating coherent and contextually relevant text.
  • Question answering: Responding to queries based on provided context or general knowledge.
  • Reasoning tasks: Handling tasks that require logical inference, as indicated by its ARC and MMLU scores.
  • Common sense understanding: Demonstrating good performance on HellaSwag and Winogrande benchmarks.

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model cover the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
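For readers unfamiliar with these knobs, the sketch below shows how two of them, temperature and top_p (nucleus filtering), reshape a toy next-token distribution. This is a from-scratch illustration of the standard technique, not Featherless's or any inference library's actual implementation; the four-token vocabulary and logit values are invented for the example.

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, zero out the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

# Toy 4-token vocabulary with raw logits (hypothetical values).
logits = [2.0, 1.0, 0.5, -1.0]
probs = apply_temperature(logits, temperature=0.8)
filtered = top_p_filter(probs, top_p=0.9)
```

With these values, the least likely token falls outside the 0.9 nucleus and is assigned zero probability, while the surviving tokens are renormalized to sum to 1. The other parameters (top_k, the penalties, min_p) prune or reweight the same distribution in analogous ways.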