EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-15936
TEXT GENERATION · Open Weights · Cold
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Jan 5, 2024
License: apache-2.0
Architecture: Transformer
EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-15936 is a 7 billion parameter language model fine-tuned from EmbeddedLLM/Mistral-7B-Merge-14-v0.3. It was fine-tuned for 3 epochs on a diverse mix of datasets, including dolphin, dolphin-coder, Magicoder-OSS-Instruct-75K, openhermes, and Synthia-v1.3, suggesting a focus on general conversation and coding assistance. The model uses a 4096-token context length and is designed for applications that need a capable 7B model with broad instruction-following abilities.
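Because the context window is 4096 tokens, callers must budget prompt and completion tokens together. A minimal sketch of that bookkeeping (the helper name and the whitespace "tokenizer" are illustrative stand-ins, not part of this model's API; in practice you would count tokens with the model's real tokenizer):

```python
def fit_to_context(prompt_tokens, max_new_tokens, context_length=4096):
    """Trim prompt tokens so prompt + completion fits the context window.

    Keeps the most recent tokens, since instruction-tuned models tend to
    weight the end of the prompt most heavily.
    """
    budget = context_length - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context length")
    return prompt_tokens[-budget:]

# Naive whitespace split stands in for the model's actual tokenizer.
tokens = "explain this function".split()
trimmed = fit_to_context(tokens, max_new_tokens=4090)
print(len(trimmed))  # → 3 (the prompt already fits the 6-token budget)
```

The same budget check applies whether the model is called locally or through a hosted endpoint: requests whose prompt plus `max_new_tokens` exceeds 4096 will be truncated or rejected.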