bunnycore/Mnemosyne-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8K · License: apache-2.0 · Architecture: Transformer · Open Weights

Mnemosyne-7B is an experimental 7-billion-parameter large language model developed by bunnycore, created by merging several pre-trained models. It combines Mistral-7B-Instruct-KhanAcademy-v0.2, Eurus-7b-kto, and Newton-7B, aiming to fold their individual strengths into a single broadly knowledgeable model for informative and educational use.


Mnemosyne-7B Overview

Mnemosyne-7B is an experimental 7 billion parameter large language model developed by bunnycore. It is constructed through a model merge using mergekit, combining the capabilities of several distinct pre-trained models. The base model for this merge is mistralai/Mistral-7B-Instruct-v0.2.
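The exact merge method and per-model weights are not documented on this card, but a mergekit merge of this kind is typically driven by a small YAML config passed to the mergekit-yaml command-line tool. The sketch below is hypothetical: the dare_ties method, equal weights, and density values are assumptions; only the component model names and the base model come from the card.

```python
# Hypothetical mergekit configuration for a Mnemosyne-7B-style merge.
# The merge method (dare_ties), weights, and densities are assumptions;
# only the model ids and the base model are taken from the model card.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: mistralai/Mistral-7B-Instruct-v0.2
    merge_method: dare_ties
    dtype: bfloat16
    models:
      - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
        parameters:
          weight: 0.33
          density: 0.5
      - model: openbmb/Eurus-7b-kto
        parameters:
          weight: 0.33
          density: 0.5
      - model: Weyaxi/Newton-7B
        parameters:
          weight: 0.33
          density: 0.5
""")

with open("mnemosyne-merge.yml", "w") as f:
    f.write(config)

# mergekit-yaml reads the config and writes the merged checkpoint
# to the given output directory.
subprocess.run(
    ["mergekit-yaml", "mnemosyne-merge.yml", "./mnemosyne-7b-merged"],
    check=True,
)
```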

Key Capabilities

  • Informative and Educational Focus: Designed to integrate knowledge from multiple sources for comprehensive responses.
  • Merged Architecture: Combines MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2, openbmb/Eurus-7b-kto, and Weyaxi/Newton-7B to leverage their individual strengths.
  • Experimental Nature: Currently undergoing testing and evaluation to assess its full effectiveness and performance.

Good For

  • Research and Development: Ideal for developers and researchers interested in exploring merged model architectures and their emergent properties.
  • Knowledge Synthesis: Potentially useful for tasks requiring the integration of diverse information for educational or informative outputs.
  • Early-stage Prototyping: Suitable for experimental applications where a broad, comprehensive knowledge base is desired, with an understanding of its experimental status (a loading sketch follows this list).
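For local experimentation, the model can be loaded like any Hugging Face causal LM. This is a minimal sketch assuming the weights are published under the repo id bunnycore/Mnemosyne-7B and that your hardware fits a 7B model in bfloat16; the prompt and sampling values are illustrative choices, not recommendations from the card.

```python
# Minimal local-inference sketch for Mnemosyne-7B with Hugging Face
# transformers. The repo id comes from the model card; dtype, device
# placement, and sampling values are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Mnemosyne-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-7B-Instruct-v0.2 is the merge base, so its chat template
# should apply; apply_chat_template formats the prompt accordingly.
messages = [{"role": "user", "content": "Explain photosynthesis in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```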

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model each specify the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
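These settings map directly onto an OpenAI-compatible completion request. The sketch below assumes Featherless's OpenAI-compatible endpoint at https://api.featherless.ai/v1 (verify against your account documentation) and uses illustrative values rather than the actual top user configs; top_k, repetition_penalty, and min_p are not part of the core OpenAI schema, so they are passed through extra_body, which many OpenAI-compatible servers accept.

```python
# Hypothetical request showing where each sampler parameter goes.
# The base_url and all numeric values are illustrative assumptions,
# not the actual Featherless top-3 configurations.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="bunnycore/Mnemosyne-7B",
    messages=[{"role": "user", "content": "Summarize Newton's three laws of motion."}],
    # Standard OpenAI sampler fields:
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard fields pass through extra_body on compatible servers:
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```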