uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Oct 13, 2023 · License: llama2 · Architecture: Transformer · Open weights
uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b is a 7-billion-parameter instruction-tuned language model, produced by merging ehartford/dolphin-2.1-mistral-7b, Open-Orca/Mistral-7B-OpenOrca, bhenrym14/mistral-7b-platypus-fp16, and ehartford/samantha-1.2-mistral-7b. Built on the Mistral-7B-v0.1 architecture, it uses Grouped-Query Attention and Sliding-Window Attention, and it performs well across standard benchmarks, averaging 53.34 on the Open LLM Leaderboard. The model is intended for general-purpose conversational AI and instruction-following tasks, offering a robust base for diverse applications.
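A minimal sketch of running the model locally with Hugging Face Transformers. The Alpaca-style prompt template below is an assumption (several of the merged source models use it); verify the expected format against the upstream model card before relying on it, and note that generation settings such as `max_new_tokens` are illustrative defaults.

```python
# Sketch: single-turn inference with Hugging Face Transformers.
# The prompt template is an ASSUMPTION (Alpaca-style); check the
# upstream model card for the format the merge was tuned on.

MODEL_ID = "uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b"


def build_prompt(instruction: str) -> str:
    """Format a single-turn instruction in Alpaca style (assumed)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Lazy import so the prompt helper stays usable without the
    # heavy transformers/torch dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)

    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

The 4K context window bounds the combined length of prompt and completion, so long instructions leave correspondingly fewer tokens for the response.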