WebraftAI/synapsellm-7b-mistral-v0.3-preview
WebraftAI/synapsellm-7b-mistral-v0.3-preview is a 7-billion-parameter, decoder-only transformer finetuned by WebraftAI from Mistral-7B-v0.1. The model is adapted for chat question-answering and code-instruction tasks, using a custom dataset that spans general code, Python code, and varied Q/A scenarios. It is designed to contribute to robust, generalized, and decentralized information systems, with particular strength in conversational AI and code-related applications.
SynapseLLM: WebraftAI/synapsellm-7b-mistral-v0.3-preview
This model is a 7-billion-parameter, decoder-only transformer developed by WebraftAI and finetuned from Mistral-7B-v0.1. It is part of the SynapseLLM series, which aims to create robust and generalized information systems.
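The card does not include a loading recipe, but since the model is published under a standard Hugging Face identifier, a minimal sketch with the transformers library would presumably look like this (the dtype and device settings are illustrative assumptions, not documented choices):

```python
# Minimal loading sketch via the Hugging Face transformers API; standard
# AutoModel usage assumed, since the card does not document a loading recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b-mistral-v0.3-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # float16 matches the training precision noted below
    device_map="auto",          # requires the accelerate package
)
```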
Key Capabilities & Training
SynapseLLM is specifically finetuned for chat question-answering and code instruction tasks. The finetuning process involved a custom dataset of 409k rows, comprising:
- 140k General Code instructions
- 143k GPT-3.5 Q/A pairs
- 63k Python code examples
- 54k General Q/A (generated via GPT-4)
The model was trained with a QLoRA adapter in float16 precision, using a batch size of 16 and the paged_adamw_32bit optimizer over 100 steps.
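For concreteness, here is a sketch of what that configuration could look like with the peft and transformers libraries. The precision, batch size, optimizer, and step count come from the card; the LoRA rank, alpha, target modules, and learning rate are placeholder assumptions:

```python
# Sketch of a QLoRA-style finetuning configuration (peft + transformers).
# Values not stated in the card (r, lora_alpha, target_modules,
# learning_rate) are placeholder assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="synapsellm-qlora",
    per_device_train_batch_size=16,  # batch size of 16, per the card
    max_steps=100,                   # 100 steps, per the card
    fp16=True,                       # float16 precision, per the card
    optim="paged_adamw_32bit",       # optimizer named in the card
    learning_rate=2e-4,              # assumed; not stated in the card
)
```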
Performance Highlights
Evaluated on the Open LLM Leaderboard, SynapseLLM-7b-mistral-v0.3-preview achieved an average score of 57.01. Notable scores include the following (a reproduction sketch follows the list):
- HellaSwag (10-shot): 74.86
- Winogrande (5-shot): 74.59
- MMLU (5-shot): 54.81
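The Open LLM Leaderboard runs these tasks through EleutherAI's lm-evaluation-harness. A rough local reproduction sketch, assuming the harness's v0.4-style Python API and default task definitions (exact versions may differ from the leaderboard's), might look like this:

```python
# Rough sketch for re-running one leaderboard task locally with
# lm-evaluation-harness (pip install lm-eval). Assumes the v0.4-style
# simple_evaluate interface; task names/versions may not match the
# leaderboard's configuration exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=WebraftAI/synapsellm-7b-mistral-v0.3-preview,dtype=float16",
    tasks=["hellaswag"],  # HellaSwag is scored 10-shot on the leaderboard
    num_fewshot=10,
)
print(results["results"]["hellaswag"])
```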
Use Cases
This model is well-suited for the following applications (a usage sketch follows the list):
- Conversational AI: Engaging in general question-answering dialogues.
- Code Generation & Assistance: Handling code-related instructions and queries, particularly in Python.
- Information Retrieval: Processing and responding to diverse Q/A prompts.
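As a quick way to try each of these use cases, the sketch below uses the transformers pipeline API with illustrative prompts; no model-specific prompt template is assumed, so results may improve with whatever instruction format the model was finetuned on:

```python
# Quick experimentation across the listed use cases via the transformers
# pipeline API. Prompts are illustrative; no model-specific chat template
# is assumed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="WebraftAI/synapsellm-7b-mistral-v0.3-preview",
    device_map="auto",
)

prompts = [
    "What causes the seasons to change?",                   # conversational Q/A
    "Write a Python function to merge two sorted lists.",   # code assistance
    "Summarize the difference between HTTP and HTTPS.",     # information Q/A
]
for p in prompts:
    out = generator(p, max_new_tokens=128, do_sample=False)
    print(out[0]["generated_text"])
```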