WebraftAI/synapsellm-7b-mistral-v0.4-preview3
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8k · Published: Dec 9, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights

WebraftAI's SynapseLLM is a 7-billion-parameter, decoder-only transformer model, fine-tuned from Mistral-7B-v0.1. It is adapted for code and general question-answering scenarios, trained on a custom dataset comprising mathematical instructions, GPT-3.5 Q&A, and code in various languages. The model is designed to contribute to robust, generalized, and decentralized information systems, and offers versatility for specific domain applications.
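As a minimal sketch, the model can be loaded for inference with the Hugging Face `transformers` library. The `[INST]` prompt wrapper below is an assumption borrowed from the Mistral-7B-Instruct convention; consult the model card for the exact template this fine-tune expects.

```python
MODEL_ID = "WebraftAI/synapsellm-7b-mistral-v0.4-preview3"


def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in Mistral-style [INST] tags (assumed format)."""
    return f"<s>[INST] {instruction} [/INST]"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Greedy generation sketch; downloads ~14 GB of FP16 weights on first call."""
    # Lazy import so the prompt helper stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(format_prompt(prompt), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

With an 8k context window, prompts plus generated tokens should stay under 8192 tokens in total.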
