speechlessai/speechless-llama2-dolphin-orca-platypus-13b

Hugging Face
Text Generation · Concurrency cost: 1 · Model size: 13B · Quant: FP8 · Context length: 4K · Published: Sep 16, 2023 · Architecture: Transformer

The speechlessai/speechless-llama2-dolphin-orca-platypus-13b is a 13 billion parameter language model fine-tuned from Meta's Llama-2-13b-hf. It leverages a unique blend of Dolphin (2% GPT4), Orca (2% GPT4), and Platypus (40%) datasets to enhance its instruction-following and reasoning capabilities. This model is designed for general-purpose text generation and dialogue use cases, offering improved performance on academic benchmarks compared to its base model.


Model Overview

The speechless-llama2-dolphin-orca-platypus-13b is a 13 billion parameter language model developed by speechlessai. It is a fine-tuned version of Meta's Llama-2-13b-hf, specifically trained on a composite dataset including Dolphin (2% GPT4), Orca (2% GPT4), and Platypus (40%) data. This strategic fine-tuning aims to bolster the model's instruction-following and general reasoning abilities.

Key Capabilities

  • Enhanced Instruction Following: Benefits from diverse instruction-tuned datasets.
  • Improved Reasoning: Shows competitive performance across various academic benchmarks.
  • General Text Generation: Capable of generating coherent and contextually relevant text.
  • Dialogue Optimization: Suitable for assistant-like chat applications, building upon Llama 2's chat optimizations.
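Because this model is fine-tuned from the base Llama-2-13b-hf rather than the chat variant, the prompt template is not stated on this page; many instruction-tuned Llama-2 derivatives use an Alpaca-style wrapper. The helper below is a minimal sketch under that assumption (`build_prompt` is a hypothetical name; check the Hugging Face model card for the format the fine-tune was actually trained on).

```python
def build_prompt(instruction: str, system: str = "You are a helpful assistant.") -> str:
    """Wrap a user instruction in an Alpaca-style template.

    NOTE: this template is an assumption, not confirmed by the model card.
    """
    return (
        f"{system}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n"
    )

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
```

The resulting string can then be passed to any text-generation endpoint serving this model.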

Performance Highlights

The model demonstrates solid performance on several academic benchmarks:

  • ARC: 59.64
  • HellaSwag: 82.65
  • MMLU: 57.90
  • TruthfulQA: 43.44
  • Average Score: 60.91
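The reported average is simply the arithmetic mean of the four benchmark scores above, which can be verified directly:

```python
# Benchmark scores as listed on this page.
scores = {"ARC": 59.64, "HellaSwag": 82.65, "MMLU": 57.90, "TruthfulQA": 43.44}

# The reported "Average Score" is the unweighted mean of the four.
average = sum(scores.values()) / len(scores)
```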

Good For

  • General-purpose chatbots and conversational AI.
  • Instruction-based text generation tasks.
  • Applications requiring improved reasoning over base Llama 2 models.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model cover the following sampler parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
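As an illustration of how two of these parameters interact, the sketch below applies temperature scaling followed by top_p (nucleus) filtering over a toy logit vector. It uses only the standard library and is not Featherless's actual sampler implementation.

```python
import math
import random

def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=None):
    """Temperature-scale logits, then sample from the smallest set of
    tokens whose cumulative probability reaches top_p (nucleus sampling)."""
    rng = rng or random.Random()
    # Softmax with temperature scaling (lower temperature -> sharper distribution).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the nucleus and draw a sample.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

token = sample_top_p([2.0, 1.0, 0.1, -1.0], rng=random.Random(0))
```

With a very small `top_p`, the nucleus collapses to the single highest-probability token, which is why low `top_p` values make generation more deterministic.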