migtissera/Synthia-7B

Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Aug 17, 2023 · License: llama2 · Architecture: Transformer · Open weights

Synthia-7B by migtissera is a 7 billion parameter Llama-2-based causal language model. It is fine-tuned on Orca-style datasets, specializing in instruction following and long-form conversational capabilities. This model is designed to provide helpful, detailed, and uncensored responses, making it suitable for interactive AI applications.


Synthia-7B: An Instruction-Following Llama-2 Model

Synthia-7B is a 7 billion parameter language model developed by migtissera, built on the Llama-2 architecture. It has been fine-tuned on Orca-style datasets, which emphasize learning from complex explanation traces, enabling it to follow instructions accurately and sustain extended, detailed conversations.

Key Capabilities & Features

  • Instruction Following: Optimized to accurately understand and execute user instructions.
  • Long-Form Conversations: Capable of maintaining coherent and detailed dialogues over multiple turns.
  • Uncensored Responses: Designed to provide direct and factual answers without content filtering.
  • Llama-2 Base: Benefits from the robust foundation of the Llama-2 model family.
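Synthia-style models are commonly prompted with a SYSTEM/USER/ASSISTANT template; the exact template and generation settings below are assumptions, not taken from the model card. A minimal sketch of building such a prompt and generating with Hugging Face `transformers`:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a SYSTEM/USER/ASSISTANT prompt (assumed Synthia-style template)."""
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

if __name__ == "__main__":
    # Loading the 7B model needs a GPU with sufficient memory; shown as a sketch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "migtissera/Synthia-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = build_prompt(
        "You are Synthia. Give detailed, step-by-step answers.",
        "Explain how attention works in a transformer.",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The prompt-building helper is deterministic and independent of the model, so it can be reused with other backends (vLLM, llama.cpp) that accept raw text prompts.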

Performance Highlights

Evaluated with the EleutherAI Language Model Evaluation Harness, Synthia-7B achieved an average score of 57.53 across the key metrics used by the HuggingFaceH4 Open LLM Leaderboard:

  • ARC Challenge: 56.14 (acc_norm)
  • HellaSwag: 78.6 (acc_norm)
  • MMLU: 50.35 (acc_norm)
  • TruthfulQA: 45.03 (mc2)
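The leaderboard average quoted above is the simple mean of the four benchmark scores, which can be verified directly:

```python
# Benchmark scores reported for Synthia-7B (acc_norm / mc2)
scores = {
    "ARC Challenge": 56.14,
    "HellaSwag": 78.60,
    "MMLU": 50.35,
    "TruthfulQA": 45.03,
}

average = round(sum(scores.values()) / len(scores), 2)
print(average)  # → 57.53, matching the reported total average
```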

Limitations

While the model aims for accuracy, it may occasionally produce inaccurate or misleading information. Despite efforts to refine the training data, its uncensored nature means it can generate inappropriate, biased, or offensive content. Cross-checking important outputs against reliable sources is advised.