Norquinal/PetrolLM

Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights

Norquinal/PetrolLM is a 7 billion parameter fine-tune of Mistral-7B-v0.1, trained with QLoRA (4-bit precision) for creative writing and roleplay tasks. The model draws on a diverse dataset including AICG Logs, PygmalionAI/PIPPA, and other roleplay-focused datasets, with some samples back-filled or converted using GPT-4/GPT-3.5-turbo-16k. It is designed to generate descriptive, immersive text for interactive narrative experiences, making it suitable for applications requiring nuanced character interactions and detailed scene descriptions.


PetrolLM: A Fine-tuned Model for Creative Writing and Roleplay

PetrolLM is a 7 billion parameter model based on Mistral-7B-v0.1, specifically fine-tuned for creative writing and roleplay scenarios. It utilizes QLoRA (4-bit precision) for efficient adaptation.

Key Capabilities and Features

  • Optimized for Creative Text Generation: Excels at producing descriptive, immersive, and engaging text for roleplay and narrative applications.
  • Diverse Training Data: Fine-tuned on a curated dataset of approximately 5800 samples, including:
    • AICG Logs
    • PygmalionAI/PIPPA
    • Squish42/bluemoon-fandom-1-1-rp-cleaned
    • OpenLeecher/Teatime
    • Norquinal/claude_multiround_chat_1k
    • jondurbin/airoboros-gpt4-1.4
    • totally-not-an-llm/EverythingLM-data-V2-sharegpt
  • GPT-Enhanced Dataset: Training samples were back-filled or converted using GPT-4/GPT-3.5-turbo-16k to fit a specific prompt format.
  • Specialized Prompt Format: Designed to work effectively with a prompt structure similar to the original SuperHOT prototype, facilitating structured roleplay interactions.
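The card does not reproduce the exact template, but SuperHOT-style formats typically interleave instruction, context, and response markers. A minimal, hypothetical sketch of assembling such a prompt (the section labels here are illustrative, not PetrolLM's verbatim format):

```python
def build_prompt(persona: str, scenario: str, history: list[str]) -> str:
    """Assemble a SuperHOT-style roleplay prompt.

    NOTE: illustrative only -- the exact template PetrolLM expects
    may differ; consult the model card's prompt-format section.
    """
    lines = [
        "### Instruction:",
        persona,
        scenario,
        "### Chat History:",
        *history,
        "### Response:",
    ]
    return "\n".join(lines)


prompt = build_prompt(
    "You are Mira, a wry tavern keeper.",
    "A storm has trapped travelers in the tavern overnight.",
    ["Traveler: Do you have any rooms left?"],
)
```

Whatever the exact labels, ending the prompt with the response marker cues the model to continue in-character rather than echo the instruction.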

Recommended Use Cases

  • Interactive Fiction and Roleplay: Ideal for generating character dialogue, environmental descriptions, and narrative progression in text-based games or roleplay platforms.
  • Creative Content Generation: Suitable for writers looking for assistance in generating descriptive passages or exploring different narrative styles.

Technical Details

  • Base Model: Mistral-7B-v0.1
  • Fine-tuning Method: QLoRA (4-bit precision)
  • LoRA Parameters: Rank 64, Alpha 16, Dropout 0.1
  • Training: BF16, 2 epochs, Cutoff Length 2048

For optimal performance in UIs like SillyTavern, specific Last Output Sequence prompts are suggested to encourage detailed and multi-paragraph responses.
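The card does not quote the exact string, but such Last Output Sequence prompts conventionally append response hints to the final marker. A hypothetical example of the kind of setting meant (the adjectives and paragraph count are illustrative, not the card's verbatim recommendation):

```
### Response (2 paragraphs, engaging, natural, descriptive, creative):
```

In SillyTavern this string goes in the Last Output Sequence field of the instruct-format settings, so the hint is injected only before the model's reply rather than into the stored chat history.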