Model Overview
Pearax/Stable-Platypus2-13B-LIMARP is a 13-billion-parameter language model built on the LLaMA 2 transformer architecture. It merges two base models: garage-bAInd/Stable-Platypus2-13B and lemonilia/limarp-llama2. The Stable-Platypus2-13B component is itself a merge of garage-bAInd/Platypus2-13B and stabilityai/StableBeluga-13B.
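The merge method itself is not documented in this card; a common technique for combining same-architecture checkpoints is a linear weight average. Below is a minimal sketch, assuming both inputs are ordinary LLaMA 2 state dicts and using a hypothetical blend ratio `alpha`:

```python
import torch

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two compatible state dicts: alpha*A + (1-alpha)*B.

    Assumes both checkpoints share identical architectures and tensor
    shapes; alpha=0.5 is a hypothetical ratio, not the one used for
    this model, whose exact merge recipe is not documented here.
    """
    merged = {}
    for name, tensor_a in sd_a.items():
        tensor_b = sd_b[name]
        merged[name] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    return merged
```

If a component is distributed as a LoRA adapter rather than full weights, it is usually folded in by adding its low-rank weight delta to the base model instead of averaging full checkpoints.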
Key Characteristics
- Architecture: Based on the LLaMA 2 transformer architecture, providing a strong foundation for language understanding and generation.
- Training: The Platypus2-13B component was trained by Cole Hunter & Ariel Lee, while StableBeluga-13B was trained by Stability AI. Stable-Platypus2-13B was instruction fine-tuned with LoRA on STEM and logic-focused datasets such as garage-bAInd/Open-Platypus.
- Language: Primarily English.
- Prompt Format: Uses the instruction-response template `### Instruction:\n<prompt>\n### Response:`; a formatting sketch follows this list.
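To make the template concrete, the helper below assembles a prompt in that format. This is a minimal sketch: the function name and the use of Python are illustrative, not part of the model's release.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the model's expected template.

    The '### Instruction:' / '### Response:' markers come from this
    card; the helper itself is illustrative.
    """
    return f"### Instruction:\n{instruction}\n### Response:"

# The model is expected to continue generating after "### Response:".
print(build_prompt("List three applications of binary search."))
```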
Intended Use Cases
This model is well-suited for developers and researchers looking for a 13B-parameter English-language model that benefits from the combined strengths of its merged predecessors. Its fine-tuning on STEM and logic-based datasets suggests potential strengths in:
- Reasoning tasks
- Problem-solving scenarios
- General instruction-following applications (a basic inference sketch follows this list)
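As an illustration of instruction following, here is a minimal generation script using the Hugging Face transformers library. It is a sketch under assumptions: it presumes the checkpoint loads through the standard AutoModelForCausalLM path and that a GPU with enough memory for a 13B model in float16 is available; the sampling settings are illustrative, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Pearax/Stable-Platypus2-13B-LIMARP"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # assumes a GPU that fits a 13B model in fp16
    device_map="auto",
)

# Build the prompt in the template described under Key Characteristics.
prompt = (
    "### Instruction:\n"
    "Explain why comparison-based sorting cannot beat O(n log n).\n"
    "### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative generation limits and sampling settings
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```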
Limitations
As with all LLMs, this model carries inherent risks, including the potential for inaccurate, biased, or otherwise objectionable outputs. Users should perform safety testing and tuning tailored to their specific applications. Note also that the Platypus2-13B base weights are distributed under a non-commercial license, which should be taken into account before any commercial deployment.