ddobokki/Llama-2-70b-orca-200k

Text Generation · Model Size: 69B · Quant: FP8 · Context Length: 32k · Architecture: Transformer · Concurrency Cost: 4 · Published: Aug 3, 2023

The ddobokki/Llama-2-70b-orca-200k model is a 69-billion-parameter language model based on the Llama-2 architecture. It has been fine-tuned on a 200k-example sample of the OpenOrca dataset and specializes in instruction-following and conversational tasks. The model is designed for general-purpose text generation and understanding, and its Orca-style training is intended to improve response quality.


Model Overview

The ddobokki/Llama-2-70b-orca-200k is a 69-billion-parameter model built on the Llama-2 architecture. It distinguishes itself through its fine-tuning process, which used a 200k-example sample from the OpenOrca dataset. This targeted training aims to enhance its instruction-following capabilities and conversational fluency.

Key Characteristics

  • Architecture: Llama-2 base model.
  • Parameter Count: 69 billion parameters.
  • Training Data: Fine-tuned on a 200k-example sample from the OpenOrca dataset, focusing on high-quality instruction-tuning data.
  • Context Length: Supports a context length of 32,768 tokens (a loading sketch follows this list).
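
As a concrete illustration, the checkpoint can be loaded with the Hugging Face transformers library. The sketch below is a minimal example under assumed settings: the dtype and device_map values are illustrative, and a 69-billion-parameter model generally requires multi-GPU sharding or quantization to fit in memory.

    # Minimal loading sketch (Hugging Face transformers).
    # dtype/device settings are illustrative assumptions; a 69B model
    # typically needs multiple GPUs or quantization to fit in memory.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ddobokki/Llama-2-70b-orca-200k"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to reduce memory use
        device_map="auto",          # shard layers across available devices
    )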

Intended Use Cases

This model is particularly well-suited for applications requiring robust instruction-following and nuanced conversational interactions. Its training on the OpenOrca dataset suggests strong performance in:

  • General-purpose text generation.
  • Question answering and dialogue systems.
  • Tasks benefiting from detailed and accurate responses based on given instructions.

Prompt Format

The model expects a specific prompt template for optimal performance:

    ### Human: {Human}
    ### Assistant: {Assistant}

Here {Human} stands for the user's message, and the model generates its reply after the ### Assistant: tag. Following this format helps the model distinguish user input from its expected output, leading to more coherent and relevant responses.
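
For a minimal end-to-end sketch, the snippet below builds a prompt in this format and generates a reply, reusing the tokenizer and model from the loading example above. The build_prompt helper and the sampling parameters are illustrative choices, not part of the model card.

    # Wrap a user message in the "### Human: / ### Assistant:" template
    # the model was fine-tuned on.
    def build_prompt(user_message: str) -> str:
        return f"### Human: {user_message}\n### Assistant:"

    prompt = build_prompt("Summarize the OpenOrca dataset in one sentence.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,  # illustrative cap on response length
        do_sample=True,
        temperature=0.7,     # illustrative sampling temperature
    )

    # Decode only the newly generated tokens, dropping the prompt.
    response = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    print(response)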