Weyaxi/Nebula-v2-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Nov 11, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Nebula-v2-7B is a 7 billion parameter language model developed by PulsarAI, fine-tuned from Mistral-7B-v0.1. It is designed for general language generation tasks and leverages the Mistral architecture for efficient performance. A 4096-token context length makes it suitable for applications with moderate input and output lengths, and the fine-tuning aims to extend its capabilities beyond the base Mistral model.


Nebula-v2-7B Overview

Nebula-v2-7B is a 7 billion parameter language model developed by PulsarAI. It is a fine-tuned version of the mistralai/Mistral-7B-v0.1 base model, inheriting its efficient architecture and performance characteristics. This model is designed to provide enhanced capabilities for various natural language processing tasks.

Key Characteristics

  • Base Model: Fine-tuned from mistralai/Mistral-7B-v0.1.
  • Parameter Count: 7 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a context window of 4096 tokens, suitable for processing and generating moderately long texts.
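The 4096-token window bounds how much prompt history the model can attend to. A minimal sketch of one common way to respect that limit, assuming a simple keep-most-recent truncation strategy (token counts here are illustrative; use the model's actual tokenizer in practice):

```python
# Fitting a long token sequence into Nebula-v2-7B's 4096-token context window.
# Strategy (an assumption, not prescribed by the model card): reserve room for
# generated tokens, then keep only the most recent prompt tokens.
CTX_LEN = 4096

def fit_to_window(token_ids, max_new_tokens=256, ctx_len=CTX_LEN):
    """Trim a token sequence so prompt + generation fits the context window."""
    budget = ctx_len - max_new_tokens  # tokens left for the prompt
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

# A 5000-token history is trimmed to its most recent 3840 tokens (4096 - 256).
trimmed = fit_to_window(list(range(5000)))
```

Other strategies (summarizing older turns, dropping middle context) trade recency for coverage; truncation is simply the cheapest baseline.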

Usage and Adaptability

Nebula-v2-7B is distributed with its full original weights, allowing developers to integrate it directly into their applications. For further customization or specific use cases, the original LoRA (Low-Rank Adaptation) adapter weights are also published separately, enabling more efficient fine-tuning and deployment in resource-constrained environments.
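A minimal sketch of loading the model with Hugging Face `transformers`, with an optional path for attaching the LoRA adapters via `peft`. The base repo ID comes from this card; the adapter repo ID is a hypothetical placeholder, since the card does not name it:

```python
# Loading Nebula-v2-7B with transformers; optionally attach LoRA adapters.
# ADAPTER_ID below is a hypothetical placeholder -- substitute the actual
# LoRA repository published alongside the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "Weyaxi/Nebula-v2-7B"
ADAPTER_ID = "your-org/nebula-v2-7b-lora"  # placeholder, not a real repo

def load_model(use_lora: bool = False):
    """Return (tokenizer, model), optionally wrapped with LoRA adapters."""
    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    if use_lora:
        from peft import PeftModel  # optional dependency
        model = PeftModel.from_pretrained(model, ADAPTER_ID)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello, Nebula!", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading the full FP16 weights needs roughly 14 GB of memory for a 7B model; `device_map="auto"` lets `accelerate` spread layers across available devices.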