ichigoberry/pandafish-dt-7b
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8k · Published: Apr 3, 2024 · License: apache-2.0 · Architecture: Transformer

ichigoberry/pandafish-dt-7b is a 7-billion-parameter language model created by ichigoberry, produced by a dare_ties merge of Experiment26-7B and MergeCeption-7B-v3 using LazyMergekit. The model shows strong general reasoning and factual recall, achieving competitive scores on the Nous benchmark suite, particularly on GPT4All and TruthfulQA. It is designed for general-purpose conversational AI and text-generation tasks, offering a balanced performance profile.


pandafish-dt-7b: A Merged 7B Language Model

pandafish-dt-7b is a 7-billion-parameter language model developed by ichigoberry, created through a dare_ties merge of Experiment26-7B and MergeCeption-7B-v3 using the LazyMergekit framework. This training-free merging technique aims to combine the strengths of both constituent models in a single checkpoint.
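A dare_ties merge like this is normally described by a mergekit YAML config (LazyMergekit generates one for you). The sketch below is an illustrative guess at that config's shape, not the actual recipe used for pandafish-dt-7b: the repository paths, density/weight values, and base model are assumptions.

```yaml
# Hypothetical mergekit config for a dare_ties merge (values illustrative).
models:
  - model: Experiment26-7B        # repo path assumed
    parameters:
      density: 0.5                # fraction of delta weights kept (DARE drop rate)
      weight: 0.5                 # contribution of this model to the merge
  - model: MergeCeption-7B-v3     # repo path assumed
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1   # assumed common ancestor
dtype: bfloat16
```

With mergekit installed, such a config is typically run via `mergekit-yaml config.yaml ./output-model`; dare_ties prunes and rescales each model's weight deltas before resolving sign conflicts TIES-style, which is why no further training is needed.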

Key Capabilities & Performance

This model has been evaluated on the Nous Benchmark suite, demonstrating competitive results among 7B models. Notably, pandafish-dt-7b achieves:

  • 77.19 on GPT4All, indicating strong general knowledge and reasoning.
  • 78.41 on TruthfulQA, suggesting good factual accuracy and resistance to hallucination.
  • A respectable average score of 62.65 across the benchmark suite, placing it among top-performing merged models like AlphaMonarch-7B and Monarch-7B.

Usage and Availability

pandafish-dt-7b is readily available, with a Hugging Face Space playground for easy interaction. For deployment, several quantized versions are provided:

  • GGUF: available from ichigoberry and from mradermacher, whose builds include IQ quants.
  • MLX: 4-bit and 8-bit versions are available from the MLX community.
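Whichever quantized build you run, you will need to format prompts before generation. The helper below sketches an Alpaca-style instruction template, which many Mistral-derived merges accept; the template is an assumption, since the model card does not document an official one.

```python
def build_prompt(instruction: str, system: str = "") -> str:
    """Format an Alpaca-style prompt (assumed template; the model card
    does not document an official one)."""
    header = f"{system}\n\n" if system else ""
    return f"{header}### Instruction:\n{instruction}\n\n### Response:\n"

# The resulting string can be passed to whichever runtime hosts the model,
# e.g. a transformers pipeline, a GGUF build via llama.cpp, or MLX.
prompt = build_prompt("Summarize the dare_ties merge method in one sentence.")
```

If generations run on without stopping, adding `"### Instruction:"` as a stop sequence in your runtime is a common workaround for this prompt style.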

Good for

  • General-purpose text generation: produces coherent, contextually relevant text.
  • Question answering: strong factual recall across diverse queries.
  • Reasoning tasks: competitive benchmark scores suggest solid general-reasoning ability.

Popular Sampler Settings

The Featherless page lists the top three sampler configurations used for this model, covering temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. The specific values are shown only on the interactive model page.