Norquinal/PetrolLM-CollectiveCognition
Norquinal/PetrolLM-CollectiveCognition is a 7 billion parameter language model based on the CollectiveCognition-v1.1-Mistral-7B architecture, enhanced with the PetrolLoRA. This model is specifically fine-tuned for roleplay and chat-based interactions, leveraging a diverse dataset including AICG Logs, PygmalionAI/PIPPA, Squish42/bluemoon-fandom-1-1-rp-cleaned, and OpenLeecher/Teatime. It is optimized for generating descriptive and immersive conversational content, particularly within roleplaying scenarios.
PetrolLM-CollectiveCognition Overview
PetrolLM-CollectiveCognition is a 7 billion parameter model built upon the CollectiveCognition-v1.1-Mistral-7B base, further specialized through the application of the PetrolLoRA. This LoRA was trained on a curated dataset of approximately 2800 samples, primarily composed of roleplay-oriented dialogues and logs.
Key Characteristics
- Base Model: CollectiveCognition-v1.1-Mistral-7B (7B parameters).
- Fine-tuning: Enhanced with PetrolLoRA, focusing on conversational and roleplay generation.
- Training Data: LoRA dataset includes AICG Logs (34%), PygmalionAI/PIPPA (33%), Squish42/bluemoon-fandom-1-1-rp-cleaned (29%), and OpenLeecher/Teatime (4%).
- Prompt Format: Utilizes a specific structured prompt for roleplay, including `style`, `characters`, `summary`, and `chat_history` sections.
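The four sections above can be assembled into a single prompt string. The sketch below is illustrative only: the field labels and layout are assumptions, and the canonical template should be taken from the model card itself.

```python
# Illustrative sketch of building the structured roleplay prompt.
# The exact section layout is an assumption; consult the model card
# for the canonical style/characters/summary/chat_history template.

def build_prompt(style: str, characters: str, summary: str, chat_history: str) -> str:
    """Combine the four prompt sections into one string."""
    return (
        f"style: {style}\n"
        f"characters: {characters}\n"
        f"summary: {summary}\n"
        f"chat_history:\n{chat_history}\n"
    )

prompt = build_prompt(
    style="Narrative, descriptive, multi-paragraph replies.",
    characters="Aria: a curious archivist. User: a traveling scholar.",
    summary="Aria and the scholar examine an old map together.",
    chat_history="Scholar: What do you make of these markings?",
)
print(prompt)
```

Front-ends such as SillyTavern perform this assembly automatically once the template fields are configured.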
Optimized Use Cases
This model is particularly well-suited for applications requiring detailed and immersive conversational outputs, especially in roleplaying contexts. Its training on diverse roleplay datasets makes it adept at:
- Generating character-driven dialogues.
- Creating descriptive and engaging narrative responses.
- Facilitating interactive storytelling and virtual companionship.
Users can integrate the model with front-ends such as Text Generation Web UI and SillyTavern; following the recommended prompt formatting helps it produce multi-paragraph, descriptive outputs.