Toten5/Marcoroni-neural-chat-7B-v2
Text Generation
- Model Size: 7B
- Quant: FP8
- Context Length: 4k
- Concurrency Cost: 1
- Published: Dec 12, 2023
- License: apache-2.0
- Architecture: Transformer
- Open Weights

Toten5/Marcoroni-neural-chat-7B-v2 is a 7-billion-parameter language model created by Toten5 by merging AIDC-ai-business/Marcoroni-7B-v3 and Intel/neural-chat-7b-v3-3, both based on Mistral-7B-v0.1. It is designed for general chat applications and aims to combine the strengths of its two parent models. With a 4096-token context length, it is suited to conversational tasks that require moderate context memory.
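Since the model is a standard Mistral-family transformer with open weights, it can be loaded with Hugging Face `transformers`. The sketch below is a minimal, unofficial example; the chat template, dtype, and generation settings are assumptions for illustration, not settings confirmed by this card.

```python
# Minimal inference sketch for Toten5/Marcoroni-neural-chat-7B-v2 using
# Hugging Face transformers. Assumes the tokenizer ships a chat template
# (common for Mistral-based chat merges); verify before relying on it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Toten5/Marcoroni-neural-chat-7B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model in fp16 needs roughly 14 GB of memory
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Mistral-7B architecture."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt + generated tokens within the model's 4096-token context window.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running this downloads the full weights on first use, so a GPU (or patience on CPU) is assumed.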
