maywell/Synatra-RP-Orca-2-7b-v0.1
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Nov 21, 2023 · License: apache-2.0 · Architecture: Transformer
Synatra-RP-Orca-2-7b-v0.1 is a 7-billion-parameter language model developed by maywell, fine-tuned from Microsoft's Orca-2-7b base model. It is designed and optimized for role-playing (RP) tasks and supports a 4096-token context window, serving as a specialized instruction-tuned model for role-play-oriented text generation.
Synatra-RP-Orca-2-7b-v0.1 Overview
Synatra-RP-Orca-2-7b-v0.1 is a 7-billion-parameter language model developed by maywell, built on the microsoft/Orca-2-7b base architecture. The model underwent supervised fine-tuning (SFT) to specialize in role-playing (RP) scenarios, which sets it apart from general-purpose LLMs. Training was performed on a single A100 80GB GPU.
Key Capabilities
- Role-Playing Specialization: Explicitly fine-tuned for generating and engaging in role-play content.
- Instruction Following: Supports both the Alpaca format (recommended for better performance) and the ChatML format, allowing flexible prompting.
- Orca-2 Base: Benefits from the strong foundational capabilities of the Orca-2-7b model, known for its reasoning abilities.
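Both supported instruction formats can be assembled with plain string templates. The delimiters below follow the common Alpaca and ChatML conventions; they are assumptions for illustration and should be verified against the model's card before use:

```python
# Sketch: building prompts in the two instruction formats this model
# supports. Templates follow the widely used Alpaca and ChatML
# conventions (assumed here, not confirmed by the model card verbatim).

def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Alpaca format (recommended for this model)."""
    if user_input:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + user_input + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"

def chatml_prompt(system: str, user: str) -> str:
    """ChatML format (also supported)."""
    return (
        "<|im_start|>system\n" + system + "<|im_end|>\n"
        "<|im_start|>user\n" + user + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example: a role-play instruction in each format.
rp_task = "Stay in character as a gruff tavern keeper and greet the traveler."
print(alpaca_prompt(rp_task))
print(chatml_prompt("You are a role-play assistant.", rp_task))
```

The resulting string is passed to the model as-is; generation then continues from the trailing `### Response:` or `<|im_start|>assistant` marker.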
Good For
- Role-Play Applications: Ideal for developers building applications that require high-quality, specialized role-playing text generation.
- Interactive Storytelling: Can be used in interactive fiction or game development where character interaction and narrative depth are crucial.
- Experimentation with SFT: Provides a clear example of a model fine-tuned for a very specific task from a powerful base model.