rmdhirr/Foxglove_7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Apr 7, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Foxglove_7B by rmdhirr is a 7-billion-parameter merged language model, optimized specifically for roleplay (RP) tasks. It excels at maintaining character consistency, adhering to character cards, and following requested markdown formatting. The model is a merge of ResplendentAI/Datura_7B and Epiculous/Mika-7B, making it a strong candidate for applications requiring nuanced conversational and narrative generation.


Foxglove_7B Overview

Foxglove_7B is a 7 billion parameter language model developed by rmdhirr, primarily designed and optimized for roleplay (RP) scenarios. This model is a strategic merge of two base models, ResplendentAI/Datura_7B and Epiculous/Mika-7B, utilizing a slerp merge method with specific layer-wise parameter tuning.
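The exact merge recipe is not reproduced here, but the core of a slerp merge is spherical linear interpolation between corresponding weight tensors of the two base models. The sketch below illustrates that operation in isolation, assuming mergekit-style behavior (falling back to linear interpolation when tensors are nearly parallel); the function name and the interpolation factor `t = 0.6` are illustrative, not the model's actual per-layer values.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0_flat = v0.flatten().float()
    v1_flat = v1.flatten().float()
    # Cosine of the angle between the two flattened weight vectors.
    dot = torch.dot(v0_flat / (v0_flat.norm() + eps),
                    v1_flat / (v1_flat.norm() + eps)).clamp(-1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly parallel vectors: slerp degenerates to plain lerp.
        return (1.0 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    w0 = torch.sin((1.0 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_flat + w1 * v1_flat).reshape(v0.shape).to(v0.dtype)

# Example: blend two same-shaped layers, weighting the second model at t = 0.6.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)
merged = slerp(0.6, a, b)
```

In a full merge, this interpolation is applied tensor-by-tensor across both checkpoints, with the layer-wise parameter tuning mentioned above corresponding to different values of `t` for different layers or module types.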

Key Capabilities

  • Roleplay Proficiency: Demonstrates strong capabilities in maintaining character consistency and adhering to character cards.
  • Markdown Adherence: Proficiently follows and generates text with desired markdown formatting.
  • Merge Architecture: Built from a merge of established 7B models, combining their strengths for specialized performance.

Performance & Usage

Evaluations on the Open LLM Leaderboard show an average score of 68.77, with notable results in HellaSwag (86.57) and Winogrande (80.74). Alpaca prompt formatting is recommended for optimal outputs, though the Mistral prompt format also produces good results. Quantized versions (GGUF) are available for broader deployment. This model is particularly suited for applications requiring intelligent, character-driven narrative generation and interactive roleplay.
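As a minimal usage sketch, the example below loads the model with the standard Hugging Face `transformers` API and applies the recommended Alpaca prompt layout; the character card text and generation settings are purely illustrative assumptions, not part of the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmdhirr/Foxglove_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; adjust for your hardware.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Alpaca-style prompt, as recommended above; the character card is illustrative only.
prompt = (
    "### Instruction:\n"
    "You are Mira, a sardonic starship engineer. Stay in character and "
    "format actions in *italics*.\n\n"
    "### Input:\n"
    "The reactor alarm starts blaring. What do you do?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

For lower-memory deployment, the GGUF quantizations mentioned above can be run with llama.cpp-based runtimes instead of `transformers`.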