migtissera/Synthia-70B

Text Generation · Concurrency Cost: 4 · Model Size: 69B · Quant: FP8 · Context Length: 32K · Published: Aug 22, 2023 · License: llama2 · Architecture: Transformer · Open Weights

Synthia-70B by migtissera is a 69-billion-parameter large language model based on Llama-2-70B and fine-tuned on Orca-style datasets. It specializes in instruction following and long-form conversation, and offers a 32,768-token context length. The model performs well across standard benchmarks, averaging 0.7132 across ARC Challenge, HellaSwag, MMLU, and TruthfulQA.


Synthia-70B: An Instruction-Following Llama-2 Model

migtissera's Synthia-70B is a 69-billion-parameter large language model built upon the Llama-2-70B architecture. Fine-tuning on Orca-style datasets significantly enhances its ability to follow instructions and sustain extended, coherent conversations. The model supports a substantial context length of 32,768 tokens.
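Synthia-style fine-tunes are typically prompted with a plain-text SYSTEM/USER/ASSISTANT layout. The exact template below is an assumption for illustration; check the upstream model card's examples before relying on it. A minimal helper might look like:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the SYSTEM/USER/ASSISTANT
    layout commonly used by Synthia-style fine-tunes (assumed format)."""
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT: "

prompt = build_prompt(
    "You are a helpful assistant.",  # placeholder system message
    "Explain the significance of a 32k context window.",
)
print(prompt)
```

The trailing `ASSISTANT: ` leaves the completion point open so the model continues from there; a long context window lets this single-turn helper be extended to many turns before truncation becomes a concern.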

Key Capabilities & Performance

Synthia-70B is designed for robust instruction following and for generating detailed, factual responses in long-form conversational settings. Evaluation with the EleutherAI Language Model Evaluation Harness, using the same metrics as the HuggingFaceH4 Open LLM Leaderboard, shows competitive performance:

  • ARC Challenge (acc_norm): 0.6945
  • HellaSwag (acc_norm): 0.8711
  • MMLU (acc_norm): 0.6891
  • TruthfulQA (mc2): 0.5979
  • Overall Average: 0.7132
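The overall average above is simply the unweighted mean of the four benchmark scores, which can be verified directly:

```python
# Benchmark scores reported for Synthia-70B (lm-evaluation-harness metrics)
scores = {
    "ARC Challenge (acc_norm)": 0.6945,
    "HellaSwag (acc_norm)": 0.8711,
    "MMLU (acc_norm)": 0.6891,
    "TruthfulQA (mc2)": 0.5979,
}

# Unweighted mean, matching the reported overall average of 0.7132
average = sum(scores.values()) / len(scores)
print(average)  # close to the reported 0.7132 (exactly 0.71315 before rounding)
```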

Considerations for Use

Synthia-70B is an uncensored model: while it aims for factual accuracy, it may occasionally produce misleading or inappropriate content. Users should exercise caution and independently verify important information. The model's license and usage are subject to the original Llama 2 terms.