cgato/L3-TheSpice-8b-v0.1.3

Text Generation · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Apr 18, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer

cgato/L3-TheSpice-8b-v0.1.3 is an 8 billion parameter language model developed by cgato, fine-tuned for flexible and distinctive interactive experiences. The model focuses on detailed narration and character introspection, letting users query scene elements or a character's thoughts without immediately advancing the story. This makes it well suited to interactive storytelling and roleplay scenarios where detailed environmental and character insight is desired.


Model Overview

cgato/L3-TheSpice-8b-v0.1.3 is an 8 billion parameter language model developed by cgato, designed to offer a more flexible and unique interactive experience. This iteration of TheSpice focuses on a "less is more" approach, utilizing a smaller, hand-edited dataset for training over three epochs. Key datasets include Capybara, Claude Multiround 30k, Augmental, ToxicQA, Yahoo Answers, Airoboros 3.1, and LimaRP.

Key Capabilities

  • Detailed Narration: The model can provide extensive descriptions of objects or characters within a scene upon request, often without immediately progressing the narrative. Users can ask "What do I see?" to get environmental details.
  • Character Introspection: It allows users to inquire about a character's thoughts or plans, offering insights into their internal state.
  • Character Summaries: Quick summaries of characters can be requested, providing background or current status information.
  • Flexible Interaction: These narrative and introspection features are designed to integrate seamlessly into ongoing conversations, allowing users to gather information before continuing the main dialogue.
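The interaction pattern above can be sketched as a chat-format conversation in which introspection queries are interleaved with ordinary roleplay turns. This is an illustrative sketch only: the message structure is a generic OpenAI-style messages list, and the actual prompt template (e.g. Llama-3 chat headers) would be applied by whichever backend serves the model.

```python
# Sketch: interleaving a narration query ("What do I see?") into an
# ongoing roleplay conversation. Roles and contents are illustrative;
# the serving backend is assumed to apply the model's chat template.

def make_conversation():
    return [
        {"role": "system",
         "content": "You are the narrator of an interactive story "
                    "set in an abandoned lighthouse."},
        # Ordinary roleplay turn: advances the story.
        {"role": "user", "content": "I push open the heavy door and step inside."},
        {"role": "assistant", "content": "The door groans on rusted hinges..."},
        # Narration query: asks for scene detail without advancing the plot.
        {"role": "user", "content": "What do I see?"},
    ]

conversation = make_conversation()
print(len(conversation), conversation[-1]["content"])
```

After the model answers the narration query with environmental detail, the user can simply continue the main dialogue in the next turn.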

Recommended Usage

This model is particularly well-suited to interactive storytelling, role-playing, and creative writing applications where detailed environmental description and character depth are paramount. It is optimized for chat-based prompt formats and is compatible with the templates used by platforms such as Oobabooga and SillyTavern. For SillyTavern, the recommended preset values are Temperature 1.25, MinP 0.1, and Repetition Penalty 1.05.
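As a concrete sketch, the recommended sampler values can be passed to an OpenAI-compatible completions endpoint. The endpoint URL and the `min_p` / `repetition_penalty` fields here are assumptions: they are extensions supported by many local serving backends rather than part of the core OpenAI API, so check your backend's parameter names before relying on them.

```python
# Sketch: request payload carrying the model card's recommended
# SillyTavern preset values (Temperature 1.25, MinP 0.1,
# Repetition Penalty 1.05). `min_p` and `repetition_penalty` are
# backend-specific extension fields, not core OpenAI parameters.

def build_payload(prompt: str) -> dict:
    return {
        "model": "cgato/L3-TheSpice-8b-v0.1.3",
        "prompt": prompt,
        "temperature": 1.25,
        "min_p": 0.1,
        "repetition_penalty": 1.05,
        "max_tokens": 512,  # illustrative choice, not from the model card
    }

payload = build_payload("What do I see?")
# The payload would then be POSTed to your backend, e.g.:
# requests.post("http://localhost:5000/v1/completions", json=payload)
```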

Popular Sampler Settings

The sampler parameters most commonly adjusted by Featherless users for this model:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p