Camel-Platypus2-70B: Merged LLaMA 2 Variant
Camel-Platypus2-70B is a 69 billion parameter instruction-tuned language model built upon the LLaMA 2 transformer architecture. It is a merge of two distinct models: garage-bAInd/Platypus2-70B and augtoma/qCammel-70-x. The Platypus2-70B component was specifically fine-tuned using LoRA on the garage-bAInd/Open-Platypus dataset, which is rich in STEM and logic-based content, contributing to the model's reasoning capabilities.
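The card does not state the exact recipe used to merge the two checkpoints. As a minimal illustration only, the sketch below assumes simple element-wise weight averaging between two models with identical architectures; the parameter names and toy values are hypothetical.

```python
# Toy sketch of parameter-space model merging (NOT the confirmed recipe
# for Camel-Platypus2-70B). Assumes two checkpoints with identical keys
# and blends each parameter as alpha * a + (1 - alpha) * b.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Blend two state dicts parameter-by-parameter."""
    if sd_a.keys() != sd_b.keys():
        raise ValueError("Models must share the same architecture/keys")
    return {
        name: [alpha * wa + (1 - alpha) * wb
               for wa, wb in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Hypothetical single-layer "checkpoints" as flat weight lists.
platypus = {"layer.weight": [0.25, 0.5, 1.0]}
qcammel  = {"layer.weight": [0.75, 0.5, 0.0]}
merged = merge_state_dicts(platypus, qcammel, alpha=0.5)
print(merged["layer.weight"])  # [0.5, 0.5, 0.5] with equal weighting
```

In practice a real merge operates on full tensor checkpoints (e.g. PyTorch state dicts) rather than Python lists, but the blending logic is the same per parameter.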
Key Capabilities
- General Language Understanding: Excels in a broad range of English language tasks.
- Logic and STEM Reasoning: Benefits from training on a specialized dataset focused on scientific, technical, engineering, and mathematical problems.
- Instruction Following: Designed to respond effectively to given instructions, using an Alpaca-style prompt template with `### Instruction:` and `### Response:` markers.
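A small helper can assemble prompts in the template described above. The exact whitespace and newline placement is an assumption based on the common Alpaca-style convention; check the model card's example before relying on it.

```python
# Sketch of the "### Instruction: / ### Response:" prompt template.
# The precise newline layout is an assumed convention, not confirmed
# by this card.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template."""
    return (
        "### Instruction:\n\n"
        f"{instruction}\n\n"
        "### Response:\n\n"
    )

prompt = build_prompt("Explain why the sky is blue.")
print(prompt)
```

The resulting string would then be passed to the tokenizer and model; generation continues from the trailing `### Response:` marker.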
Performance Highlights
Evaluated on the Open LLM Leaderboard, Camel-Platypus2-70B achieves an average score of 64.23. Notable scores include 71.08 on ARC (25-shot), 87.6 on HellaSwag (10-shot), and 70.04 on MMLU (5-shot).
Licensing and Limitations
The model is released under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0). As with all large language models, it carries inherent risks of producing inaccurate, biased, or otherwise objectionable content, and developers should conduct thorough, application-specific safety testing before deployment.