ChaoticNeutrals/Prodigy_7B

Text generation · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Feb 26, 2024 · License: other · Architecture: Transformer · Concurrency cost: 1

ChaoticNeutrals/Prodigy_7B is a 7 billion parameter language model created by ChaoticNeutrals, merged via SLERP from macadeliccc/WestLake-7B-v2-laser-truthy-dpo and ChaoticNeutrals/This_is_fine_7B. The model achieves an average score of 73.68 on the Open LLM Leaderboard, with notably strong results on the HellaSwag and Winogrande benchmarks, and is suitable for a range of general-purpose language generation and understanding tasks.


Prodigy_7B: A Merged 7B Language Model

Prodigy_7B is a 7 billion parameter language model developed by ChaoticNeutrals, created by merging two base models: macadeliccc/WestLake-7B-v2-laser-truthy-dpo and ChaoticNeutrals/This_is_fine_7B. The merge was performed with SLERP (spherical linear interpolation), which blends the two models' weights along the arc between them rather than averaging them linearly, a property that helps preserve the characteristics of both parents.
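To make the merge method concrete, here is a minimal sketch of spherical linear interpolation applied to a pair of weight tensors. The function name, the t=0.5 blend factor, and the NumPy implementation are illustrative assumptions; the released model was produced with a dedicated merge toolchain, not this snippet.

```python
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float = 0.5, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors of the same shape.

    Illustrative sketch only: real merges apply this per-layer across
    two full model checkpoints.
    """
    v0 = w0.ravel().astype(np.float64)
    v1 = w1.ravel().astype(np.float64)
    # Angle between the two weight vectors.
    cos_omega = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = np.sin(omega)
        merged = (np.sin((1.0 - t) * omega) / sin_omega) * v0 \
               + (np.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape).astype(w0.dtype)
```

At t=0 the result equals the first parent's weights and at t=1 the second's; intermediate values trace the arc between the two weight vectors instead of the straight line a plain weighted average would follow.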

Key Capabilities & Performance

Prodigy_7B exhibits robust performance across various benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 73.68, indicating strong general language understanding and reasoning abilities. Specific benchmark results include:

  • AI2 Reasoning Challenge (25-shot): 71.59
  • HellaSwag (10-shot): 88.09
  • MMLU (5-shot): 64.92
  • TruthfulQA (0-shot): 68.57
  • Winogrande (5-shot): 84.53
  • GSM8k (5-shot): 64.37

These scores highlight its proficiency in common sense reasoning, factual recall, and problem-solving tasks.
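The reported leaderboard average is the unweighted mean of the six task scores, which can be verified in a couple of lines of Python (the dictionary below simply restates the scores from the list above):

```python
# Verify the Open LLM Leaderboard average from the per-task scores.
scores = {
    "ARC (25-shot)": 71.59,
    "HellaSwag (10-shot)": 88.09,
    "MMLU (5-shot)": 64.92,
    "TruthfulQA (0-shot)": 68.57,
    "Winogrande (5-shot)": 84.53,
    "GSM8k (5-shot)": 64.37,
}
average = sum(scores.values()) / len(scores)
print(f"Open LLM Leaderboard average: {average:.2f}")  # -> 73.68
```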

Use Cases

Prodigy_7B is well-suited for applications requiring a capable 7B model with balanced performance across diverse tasks (a minimal loading-and-generation sketch follows this list). Its strong benchmark results suggest it can be used effectively for:

  • General text generation and completion
  • Question answering
  • Reasoning and logical inference tasks
  • Summarization and content creation
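Below is a minimal sketch for loading the model and generating text with the Hugging Face Transformers library. The prompt and generation settings (max_new_tokens, temperature) are illustrative assumptions, not recommendations from the model authors; the snippet also assumes the transformers and accelerate packages are installed and that the weights are available under the Hub ID above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Prodigy_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between deductive and inductive reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; settings here are illustrative defaults.
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```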