trashpanda-org/Llama3-24B-Mullein-v1
Llama3-24B-Mullein-v1 by trashpanda-org is a 24-billion-parameter language model based on the Llama 3 architecture, featuring a 32,768-token context length. This model is specifically fine-tuned for enhanced character and scenario handling in roleplay, demonstrating reduced user impersonation and improved narrative coherence. It excels at generating varied responses and following up on messages from larger models, making it suitable for interactive storytelling applications.
Llama3-24B-Mullein-v1: Roleplay Optimized Language Model
Llama3-24B-Mullein-v1, developed by trashpanda-org, is a 24-billion-parameter model built on the Llama 3 architecture with a 32,768-token context window. This iteration, a successor to v0, is specifically optimized for JLLM-style interactions, focusing on improved character and scenario handling.
Key Capabilities & Features
- Enhanced Roleplay: Demonstrates strong character and scenario management, with significantly reduced user impersonation compared to previous versions.
- Narrative Coherence: Capable of following up on complex chat histories and messages from larger models, maintaining narrative flow.
- Varied Responses: Provides diverse rerolls and responses, contributing to dynamic and engaging interactions.
- Context Template Flexibility: While Mistral V7/V3 Tekken templates are suggested, testing indicates Llama 3 context/instruct templates yield surprisingly effective results.
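The Llama 3 context/instruct template noted above follows a fixed header layout. As a minimal sketch of how a roleplay turn maps onto it, the helper below hand-rolls that layout (in practice `tokenizer.apply_chat_template` would produce it; the character and prompts here are invented for illustration):

```python
def build_llama3_prompt(system, turns):
    """Render a system prompt plus (role, content) turns into the Llama 3
    instruct format, ending with an open assistant header so the model
    continues in character rather than impersonating the user."""
    parts = [
        "<|begin_of_text|>",
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>",
    ]
    for role, content in turns:
        parts.append(
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
        )
    # Open assistant header: generation picks up from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama3_prompt(
    "You are Kael, a gruff blacksmith. Never speak or act for the user.",
    [("user", "I hand Kael the broken sword without a word.")],
)
```

Keeping the "never speak for the user" instruction in the system block plays to the model's reduced-impersonation tuning; the open assistant header at the end is what cues the model to write the character's next turn.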
Recommended Use Cases
- Interactive Storytelling & Roleplay: Ideal for applications requiring consistent character portrayal and engaging narrative generation.
- Chatbot Development: Suitable for creating chatbots that need to handle complex scenarios and maintain context over extended conversations.
- Creative Writing Assistance: Can assist in generating diverse dialogue and plot developments for creative projects.
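For the chatbot use case, maintaining context over extended conversations amounts to replaying the accumulated message history on every turn. A minimal sketch of building one request body for an OpenAI-compatible chat endpoint follows; the sampler values and roleplay content are illustrative assumptions, not settings published for this model:

```python
MODEL_ID = "trashpanda-org/Llama3-24B-Mullein-v1"

def build_chat_request(history, user_message, system_prompt,
                       temperature=0.8, max_tokens=512):
    """Assemble the JSON body for one chat-completion turn.

    `history` is a list of {"role": ..., "content": ...} dicts from earlier
    turns, so character and scenario context persists across the session.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return {
        "model": MODEL_ID,
        "messages": messages,
        "temperature": temperature,  # illustrative; tune per deployment
        "max_tokens": max_tokens,
    }


history = [
    {"role": "user", "content": "The tavern door creaks open."},
    {"role": "assistant",
     "content": "Mira glances up from the bar, hand drifting to her dagger."},
]
body = build_chat_request(
    history,
    "I take a seat and order an ale.",
    system_prompt="You are Mira, a wary tavern keeper. Stay in character.",
)
```

After each completion, append both the user message and the model's reply to `history` before the next call; the 32,768-token window leaves ample room for long sessions before trimming is needed.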