ResplendentAI/Flora_7B

Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Ctx length: 4K · Published: Mar 5, 2024 · License: cc-by-sa-4.0 · Architecture: Transformer · Open weights

Flora_7B is a 7 billion parameter language model developed by ResplendentAI, built with a linear merge of jeiku/FloraBase and jeiku/Synthetic_Soul_1k_Mistral_128. The model shows strong general reasoning, posting an average score of 74.26 on the Open LLM Leaderboard, including 72.10 on the AI2 Reasoning Challenge and 64.16 on MMLU, and is a capable general-purpose choice for text understanding and generation tasks.

Flora_7B: A Merged 7B Language Model

Flora_7B is a 7 billion parameter language model developed by ResplendentAI. It was created with a linear merge combining jeiku/FloraBase and jeiku/Synthetic_Soul_1k_Mistral_128, with a weight of 1 applied to the base model. Linear merging averages the parameters of the constituent checkpoints, aiming to combine their strengths into a single capable general-purpose LLM.
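
As an illustration only, a linear merge amounts to a weighted average of corresponding parameter tensors across checkpoints. The sketch below is not the actual merge configuration or tooling used for Flora_7B; the weight for the second model is an assumption for demonstration.

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching parameter tensors across checkpoints.

    Assumes all checkpoints share the same architecture and tensor names,
    as is the case for two Mistral-7B-derived models.
    """
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].to(torch.float32) for sd, w in zip(state_dicts, weights)
        ) / total
    return merged

# Illustrative usage: weight 1.0 for the base model (per the model card);
# the second model's weight here is a placeholder assumption.
# merged = linear_merge([flora_base_sd, synthetic_soul_sd], weights=[1.0, 1.0])
```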

Key Capabilities & Performance

Flora_7B demonstrates solid performance across a range of benchmarks, as evaluated on the Open LLM Leaderboard. Its average score is 74.26, indicating strong general reasoning and language understanding. Specific benchmark results include:

  • AI2 Reasoning Challenge (25-shot): 72.10
  • HellaSwag (10-shot): 88.31
  • MMLU (5-shot): 64.16
  • TruthfulQA (0-shot): 71.19
  • Winogrande (5-shot): 84.45
  • GSM8k (5-shot): 65.35

These scores highlight its proficiency in common sense reasoning, factual recall, and mathematical problem-solving.
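
For reference, the leaderboard average is the unweighted mean of the six benchmark scores above; a minimal check in Python (scores copied from the list):

```python
scores = {
    "ARC (25-shot)": 72.10,
    "HellaSwag (10-shot)": 88.31,
    "MMLU (5-shot)": 64.16,
    "TruthfulQA (0-shot)": 71.19,
    "Winogrande (5-shot)": 84.45,
    "GSM8k (5-shot)": 65.35,
}

average = sum(scores.values()) / len(scores)
print(f"Open LLM Leaderboard average: {average:.2f}")  # 74.26
```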

When to Use Flora_7B

Flora_7B is a suitable choice for developers seeking a 7B parameter model with balanced performance across various tasks. Its benchmark results suggest it can be effectively applied in scenarios requiring:

  • General text generation and comprehension.
  • Reasoning tasks and question answering.
  • Applications where a 7B model's efficiency is preferred over larger, more resource-intensive alternatives, without significant compromise on core capabilities.
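
A minimal sketch for loading the model with Hugging Face transformers, assuming the ResplendentAI/Flora_7B repository ID. The prompt is a placeholder and the card does not specify a prompt template, so adjust the formatting and generation settings to your use case.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ResplendentAI/Flora_7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps a 7B model within a single ~16 GB GPU
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```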