Locutusque/OpenCerebrum-1.0-7b-SFT
Locutusque/OpenCerebrum-1.0-7b-SFT is a 7-billion-parameter language model fine-tuned from alpindale/Mistral-7B-v0.2-hf. It was trained on roughly 1.2 million examples drawn from 14 diverse datasets, with the goal of replicating the capabilities of AetherResearch's proprietary Cerebrum model. The model performs well on coding, math, science, reasoning, and general instruction-following tasks.
OpenCerebrum-1.0-7b-SFT Overview
Locutusque/OpenCerebrum-1.0-7b-SFT is an open-source language model with 7 billion parameters, built on the alpindale/Mistral-7B-v0.2-hf base model. Its primary objective is to replicate the advanced capabilities of AetherResearch's proprietary Cerebrum model through extensive fine-tuning.
Key Capabilities
- Diverse Task Performance: Fine-tuned on approximately 1.2 million examples from 14 public datasets, covering a wide range of domains.
- Specialized Domains: Demonstrates strong performance in coding, mathematics, science, and complex reasoning tasks.
- General Instruction-Following: Capable of general question-answering and text generation, making it versatile for various applications.
- Open-Source Foundation: Released under the Apache 2.0 license, promoting accessibility and further development.
Intended Use Cases
- Coding Assistance: Generating and understanding code snippets.
- Mathematical Problem Solving: Working through mathematical problems and explaining underlying concepts.
- Scientific Inquiry: Providing information and reasoning for scientific questions.
- General Q&A: Answering a broad spectrum of questions and generating coherent text.
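For the use cases above, the model can be loaded with the Hugging Face `transformers` library like any other causal language model. The sketch below is illustrative: the model id comes from this card, but the generation settings (`max_new_tokens`, greedy decoding) are assumptions, not tuned recommendations, and a GPU with sufficient memory is assumed for a 7B model.

```python
# Minimal sketch: loading OpenCerebrum-1.0-7b-SFT for text generation.
# Generation parameters here are illustrative defaults, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Locutusque/OpenCerebrum-1.0-7b-SFT"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and tokenizer, then generate a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Example coding-assistance prompt from the use cases above.
    print(generate("Write a Python function that checks whether a number is prime."))
```

Because the underlying base model is Mistral-7B-v0.2, the usual Mistral-compatible inference stacks (e.g. quantized or server-based runtimes) should also work, though that is an inference-setup choice rather than something this card specifies.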
While designed as a capable open-source alternative, its performance may not fully match the proprietary Cerebrum model, owing to differences in training data scale. Users should also be aware of potential biases inherited from the fine-tuning datasets.