WebraftAI/synapsellm-7b-mistral-v0.4-preview2
Text generation | Model size: 7B | Quantization: FP8 | Context length: 8k | Published: Nov 30, 2023 | License: apache-2.0 | Architecture: Transformer

WebraftAI/synapsellm-7b-mistral-v0.4-preview2 is a 7-billion-parameter decoder-only transformer, finetuned from Mistral-7b-v0.1 by WebraftAI. It is adapted for chat-style question answering and code instructions, trained on a custom dataset covering mathematical Q/A, general Q/A, and code in several languages. The model is intended as a general-purpose assistant, with a focus on question-answering and code-related tasks.
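A minimal sketch of querying the model with the Hugging Face `transformers` library, assuming the weights are published on the Hub under the repo id shown below. The `build_prompt` helper uses a generic instruction-style template as an illustration; the exact chat template used during finetuning is not documented here, so adjust it to the model's actual format.

```python
MODEL_ID = "WebraftAI/synapsellm-7b-mistral-v0.4-preview2"

def build_prompt(question: str) -> str:
    # Generic instruction-style prompt (an assumption, not the documented
    # template for this model); swap in the real template if one is published.
    return f"### Instruction:\n{question}\n\n### Response:\n"

def generate(question: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper above can be used without
    # pulling in the heavy model dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

Greedy decoding (`do_sample=False`) is used here for reproducibility; for more varied chat output, enable sampling with a temperature.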
