Weyaxi/zephyr-alpha-Nebula-v2-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Nov 12, 2023 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights · Cold

zephyr-alpha-Nebula-v2-7B is a 7 billion parameter language model created by merging HuggingFaceH4/zephyr-7b-alpha and PulsarAI/Nebula-v2-7B-Lora. It is designed for general language tasks, drawing on the strengths of both merged components. With an 8192-token context window, it suits applications that require moderately long context understanding and generation.


Overview

zephyr-alpha-Nebula-v2-7B results from merging two components: the HuggingFaceH4/zephyr-7b-alpha model and the PulsarAI/Nebula-v2-7B-Lora LoRA adapter. The merge aims to combine the capabilities of its constituents into a single model that is versatile across natural language processing tasks.

Key Characteristics

  • Architecture: A merged model combining Zephyr-7B-Alpha and Nebula-v2-7B-Lora.
  • Parameter Count: 7 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports an 8192-token context window, enabling it to process and generate longer sequences of text.
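As a minimal usage sketch, the snippet below loads the model with Hugging Face transformers. It assumes the weights are hosted on the Hub under the Weyaxi/zephyr-alpha-Nebula-v2-7B repository named above and that a GPU with enough memory for a 7B model is available; adapt the dtype and device settings to your hardware.

```python
# Minimal sketch: load the merged model with Hugging Face transformers.
# Assumption: the weights live at "Weyaxi/zephyr-alpha-Nebula-v2-7B" on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/zephyr-alpha-Nebula-v2-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "Explain what a model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```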

Potential Use Cases

Given its merged nature and moderate parameter count, zephyr-alpha-Nebula-v2-7B is likely suitable for:

  • General text generation and completion.
  • Instruction-following tasks.
  • Conversational AI and chatbots.
  • Summarization and question-answering where the 8192-token context is beneficial.
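For instruction-following and chat use cases, the sketch below reuses the model and tokenizer loaded above and formats the conversation with the tokenizer's chat template. This assumes the merge inherits zephyr-7b-alpha's chat template; if the repository's tokenizer config does not define one, prompts would need to be formatted manually.

```python
# Sketch of an instruction-following call via the tokenizer's chat template.
# Assumption: the merged model inherits zephyr-7b-alpha's chat template.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarize the trade-offs of 7B models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```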

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration sets values for the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
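As a sketch of how these sampler parameters might be applied when calling the model through an OpenAI-compatible client, the snippet below passes the standard fields directly and the non-standard ones via extra_body. The base URL, API key placeholder, and the assumption that the server accepts top_k, repetition_penalty, and min_p this way are all unverified here; the parameter values are illustrative, not the actual top-3 configurations.

```python
# Sketch: applying sampler settings through an OpenAI-compatible client.
# Assumptions: the model is served at an OpenAI-compatible endpoint (base_url
# below) and the server accepts the extra sampling fields passed in extra_body.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="Weyaxi/zephyr-alpha-Nebula-v2-7B",
    messages=[{"role": "user", "content": "Write a haiku about merging models."}],
    temperature=0.7,            # illustrative values, not the top-3 configs
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={                # non-standard samplers, if the server accepts them
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```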