WebraftAI/synapsellm-7b-mistral-v0.5-preview

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Dec 9, 2023 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

WebraftAI's SynapseLLM is a 7-billion-parameter, decoder-only transformer model finetuned from Mistral 7B v0.1. It is adapted for code and general question-answering tasks using a custom dataset focused on those domains. WebraftAI's stated aim is a robust, generalized, and decentralized information system that remains versatile enough for specific applications.


SynapseLLM: WebraftAI's Finetuned Mistral Model

SynapseLLM is a 7 billion parameter, decoder-only transformer model developed by WebraftAI. It is a finetuned version of Mistral 7B v0.1, specifically adapted for code and general question-answering tasks. The finetuning process utilized a custom dataset of 1.54 million rows, comprising a mix of Maths Instruct Q/A, GPT-3.5 Q/A, General Code, Python code, and General Q/A generated via GPT-4.

Key Characteristics

  • Base Model: Mistral 7B v0.1
  • Parameters: 7 billion
  • Finetuning: QLoRA adapter, float16 precision, paged_adamw_32bit optimizer (see the first sketch after this list)
  • Training Data: 1.54M rows across diverse Q/A and code categories
  • Prompt Format: Follows the Mistral Instruct 7B v0.1 format (see the second sketch after this list)
  • License: Apache 2.0
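
The card names a QLoRA adapter and the paged_adamw_32bit optimizer but does not publish the training script, so the following is only a minimal sketch of what such a setup could look like with the peft and transformers libraries. The rank, alpha, batch size, and output path are illustrative assumptions, not values from WebraftAI; only the optimizer name and float16 precision come from the card.

```python
# Hypothetical QLoRA-style configuration sketch; hyperparameters are
# placeholders chosen for illustration.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                  # assumption: adapter rank is not stated in the card
    lora_alpha=32,         # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="synapsellm-qlora",  # hypothetical output path
    optim="paged_adamw_32bit",      # optimizer named in the card
    fp16=True,                      # float16 precision, as listed
    per_device_train_batch_size=4,  # assumption
)
```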
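For the prompt format, Mistral Instruct 7B v0.1 wraps the user message in `[INST] ... [/INST]` markers. A minimal helper, assuming the tokenizer adds the `<s>` BOS token itself:

```python
# Builds a Mistral Instruct v0.1-style prompt. The <s> BOS token is assumed
# to be added by the tokenizer, so only the [INST] wrapper is applied here.
def build_prompt(user_message: str) -> str:
    return f"[INST] {user_message} [/INST]"

print(build_prompt("Write a Python function that reverses a string."))
# [INST] Write a Python function that reverses a string. [/INST]
```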

Use Cases & Limitations

This model is designed for applications requiring chat-based Q/A and code instruction processing. It is provided as a full merged model, loadable directly with the transformers library, as sketched below. Developers should be aware of its limitations: outputs may be biased or factually incorrect, the model does not reliably adhere to system prompts, and it has no inherent memory across turns. The model is English-only.
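
A minimal loading sketch with the transformers library, using the repository id from this page's title; the dtype, device placement, and generation settings are illustrative choices rather than values from the model card.

```python
# Minimal sketch: load the merged model and run one instruction-formatted
# prompt. device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WebraftAI/synapsellm-7b-mistral-v0.5-preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: matches the fp16 finetuning precision
    device_map="auto",
)

prompt = "[INST] Explain what a decoder-only transformer is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```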