OpenPipe/mistral-ft-optimized-1218
Task: Text Generation (Open Weights)
Model Size: 7B | Quantization: FP8 | Context Length: 8k | Concurrency Cost: 1
Published: Dec 17, 2023 | License: cc-by-nc-4.0 | Architecture: Transformer

OpenPipe/mistral-ft-optimized-1218 is a 7-billion-parameter language model from OpenPipe, based on the Mistral-7B-v0.1 architecture. It is a merge of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp and Q-bert/MetaMath-Cybertron-Starling, intended as a strong base for downstream fine-tuning across a range of tasks. It supports a context length of 8192 tokens.
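When building on an 8192-token context window, it is common to budget tokens between the prompt and the completion so a request never exceeds the limit. The sketch below shows one way to do that; the helper names and the generation reserve are hypothetical, not part of the model or any OpenPipe API.

```python
# Hypothetical token-budgeting helpers for a model with an 8192-token
# context window, such as mistral-ft-optimized-1218.
CTX_LEN = 8192  # context length of the model

def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    ctx_len: int = CTX_LEN) -> bool:
    """True if the prompt plus the reserved completion fit in the window."""
    return prompt_tokens + max_new_tokens <= ctx_len

def max_prompt_tokens(max_new_tokens: int, ctx_len: int = CTX_LEN) -> int:
    """Largest prompt length that still leaves room for max_new_tokens."""
    return max(ctx_len - max_new_tokens, 0)
```

For example, reserving 512 tokens for generation leaves up to 7680 tokens for the prompt; a caller would truncate or summarize the prompt before sending any request that exceeds that budget.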
