EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-9984
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 4, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-9984 is a 7 billion parameter language model fine-tuned from EmbeddedLLM/Mistral-7B-Merge-14-v0.3. The model was trained for 9984 fine-tuning steps on a diverse mix of datasets, including dolphin, dolphin-coder, Magicoder-OSS-Instruct-75K, openhermes, and Synthia-v1.3. It is designed for general conversational AI and coding assistance, supports a 4096-token context length, and expects the ChatML prompt format.
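Since the model expects ChatML-formatted input, a minimal sketch of assembling such a prompt is shown below. The helper function name is illustrative, not part of the model's tooling; the `<|im_start|>`/`<|im_end|>` markers follow the standard ChatML convention.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt using the standard <|im_start|>/<|im_end|> markers.

    The trailing '<|im_start|>assistant\n' cues the model to generate its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python one-liner that reverses a string.",
)
print(prompt)
```

The resulting string can be passed to any inference endpoint or tokenizer that does not apply a chat template automatically; when using a library that already supports chat templates, prefer its built-in templating instead.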
