antiven0m/reverie-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights · Cold

antiven0m/reverie-7b is a 7 billion parameter merged language model, successor to the 'finch' model, designed for enhanced coherence, intelligence, and uncensored responses. It combines AetherResearch/Cerebrum-1.0-7b for strong reasoning, macadeliccc/WestLake-7B-v2-laser-truthy-dpo for creativity and verbosity, and SanjiWatsuki/Kunoichi-DPO-v2-7B for explicit roleplay. This model excels in creative and explicit conversational tasks, offering a distinct personality and improved reasoning capabilities within its 8192 token context window.


antiven0m/reverie-7b: A Merged 7B Model for Creative and Explicit Interactions

Reverie-7b is a 7 billion parameter language model developed by antiven0m, building upon the 'finch' model merge. It aims to deliver a more coherent, intelligent, and uncensored conversational experience, though it may take a few generation attempts to produce its best output.

Key Capabilities & Merged Components

This model merges three distinct 7B models using the Model Stock merging method:

  • AetherResearch/Cerebrum-1.0-7b: Contributes strong reasoning skills, making Reverie-7b one of the smarter 7B models available.
  • macadeliccc/WestLake-7B-v2-laser-truthy-dpo: Enhances the model's creativity and verbosity, allowing for more elaborate and imaginative outputs.
  • SanjiWatsuki/Kunoichi-DPO-v2-7B: Provides exceptional capabilities in explicit roleplay (ERP), making the model particularly 'spicy' and uncensored.
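A Model Stock merge like the one above is typically expressed as a mergekit configuration. The sketch below is illustrative only, not the author's published config; in particular, the `base_model` choice is an assumption (all three components are Mistral-7B derivatives), and the dtype is a guess:

```yaml
# Hypothetical mergekit config for a Model Stock merge of the three components.
models:
  - model: AetherResearch/Cerebrum-1.0-7b
  - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1  # assumed base; not confirmed by the card
dtype: bfloat16
```

Model Stock interpolates the fine-tuned checkpoints toward the base model using weights derived from their geometric relationship, which is why a shared `base_model` is required.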

Use Cases & Strengths

Reverie-7b is particularly well-suited for:

  • Creative Writing and Storytelling: Leveraging its verbose and creative components.
  • Explicit Roleplay (ERP): Excelling in scenarios requiring uninhibited and detailed responses.
  • Conversational AI: Offering coherent and intelligent interactions with a distinct personality.
  • Reasoning Tasks: Benefiting from the Cerebrum-1.0 component for improved logical processing.

Reverie-7b supports a context length of 8192 tokens. The recommended configuration uses a temperature of 1.15, with the Alpaca or Alpaca-Roleplay presets in instruct mode, to make the most of its blend of capabilities.

Popular Sampler Settings

The most common parameter combinations used by Featherless users for this model involve the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
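To clarify how the truncation samplers in that list interact, here is a minimal numpy sketch of the usual pipeline: temperature scaling, then top-k, top-p (nucleus), and min-p filtering, then sampling from the renormalized distribution. This is illustrative only; it omits the repetition/frequency/presence penalties, and the server-side implementation may differ:

```python
import numpy as np

def sample_filtered(logits, temperature=1.15, top_k=0, top_p=1.0,
                    min_p=0.0, rng=None):
    """Apply temperature, then top-k / top-p / min-p filters, then sample a token id."""
    rng = rng if rng is not None else np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()

    keep = np.ones_like(probs, dtype=bool)
    if top_k > 0:                           # keep only the k most probable tokens
        kth = np.sort(probs)[-top_k]
        keep &= probs >= kth
    if top_p < 1.0:                         # smallest set reaching cumulative mass top_p
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        nucleus = np.zeros_like(keep)
        nucleus[order[:cutoff]] = True
        keep &= nucleus
    if min_p > 0.0:                         # drop tokens below min_p * max probability
        keep &= probs >= min_p * probs.max()

    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Note that a high temperature such as the recommended 1.15 flattens the distribution, which is why it is usually paired with a truncation sampler like min_p or top_p to keep low-probability tokens out of the tail.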