Weyaxi/zephyr-beta-Nebula-v2-7B

Text Generation

  • Concurrency Cost: 1
  • Model Size: 7B
  • Quant: FP8
  • Context Length: 8k
  • Published: Nov 12, 2023
  • License: cc-by-nc-4.0
  • Architecture: Transformer
  • Tags: Open Weights, Cold

Weyaxi/zephyr-beta-Nebula-v2-7B is a 7 billion parameter language model created by merging HuggingFaceH4/zephyr-7b-beta with PulsarAI/Nebula-v2-7B-Lora. The merge is intended to combine zephyr's instruction-following ability with the Nebula LoRA's fine-tuning, making the model suitable for general language understanding and generation tasks.


Overview

Weyaxi/zephyr-beta-Nebula-v2-7B is a 7 billion parameter language model resulting from the strategic merge of two distinct models: HuggingFaceH4/zephyr-7b-beta and PulsarAI/Nebula-v2-7B-Lora. This merging approach aims to combine the respective strengths of its base models, potentially leading to improved performance across a range of natural language processing tasks.

Key Characteristics

  • Architecture: A merged model combining zephyr-7b-beta and Nebula-v2-7B-Lora.
  • Parameter Count: 7 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports an 8192-token context window, suitable for handling moderately long inputs and generating coherent responses.
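Since the model is a merge based on zephyr-7b-beta, it presumably inherits Zephyr's chat format (`<|system|>`, `<|user|>`, and `<|assistant|>` header lines, each turn terminated by `</s>`). As a minimal sketch under that assumption, prompts can be assembled by hand; the helper name `build_zephyr_prompt` is illustrative, not part of any official API:

```python
def build_zephyr_prompt(messages):
    """Format chat messages in the Zephyr style: a <|role|> header line,
    the message content, then the </s> end-of-sequence token."""
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}</s>")
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_zephyr_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize model merging in one sentence."},
])
```

In practice, loading the tokenizer with `transformers` and calling its `apply_chat_template` method is the safer route, since it uses whatever template ships with the checkpoint.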

Performance

While specific benchmark scores are not detailed in the provided README, the model is listed on the Open LLM Leaderboard, indicating its participation in standardized evaluations. Users interested in detailed performance metrics such as ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K, and DROP should refer to the Open LLM Leaderboard for the most up-to-date results.

Good for

  • General-purpose text generation and understanding.
  • Applications requiring a 7B parameter model with an 8k-token context window.
  • Experimentation with merged model architectures.

Popular Sampler Settings

The most popular sampler configurations among Featherless users for this model adjust the following parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
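Of these, top_p (nucleus sampling) is often the least intuitive: it keeps only the smallest set of highest-probability tokens whose cumulative probability reaches the threshold, then renormalizes. A minimal pure-Python sketch of that filtering step (the function name `top_p_filter` is illustrative; real inference stacks implement this over tensors):

```python
import math

def top_p_filter(logits, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; return (token_index, renormalized_prob) pairs."""
    # Softmax over raw logits (shifted by max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Walk tokens in descending probability until the mass threshold is hit.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize the surviving probabilities so they sum to 1.
    norm = sum(probs[i] for i in kept)
    return [(i, probs[i] / norm) for i in kept]
```

Lower top_p values make the model more conservative by discarding the long tail of unlikely tokens; temperature is applied to the logits before this step, and top_k simply caps the number of candidates instead of their cumulative mass.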