chanwit/flux-base-optimized
Text Generation · Transformer · Open Weights
Model Size: 7B · Quant: FP8 · Context Length: 4k · Concurrency Cost: 1
Published: Jan 13, 2024 · License: apache-2.0

chanwit/flux-base-optimized is a 7-billion-parameter base model, built by hierarchical SLERP merging of Mistral-7B-v0.1, OpenHermes-2.5-Mistral-7B, neural-chat-7b-v3-3, MetaMath-Mistral-7B, and openchat-3.5-0106. Designed as a foundational model, it combines the strengths of its constituent models into a robust base for further fine-tuning. Its 4096-token context length supports a range of general language understanding and generation tasks.
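Hierarchical SLERP merging interpolates between model weights along the great-circle arc between parameter vectors rather than along a straight line, then applies the same operation to the intermediate merges. The sketch below illustrates the idea on toy NumPy tensors; the function name, interpolation factors, and the pairwise merge order are illustrative assumptions, not the actual recipe used for this model.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate values of t follow the
    great-circle arc between the two (flattened) parameter vectors.
    """
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Cosine of the angle between the two parameter vectors.
    dot = np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    # Nearly colinear tensors: fall back to plain linear interpolation.
    if theta < eps:
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    out = (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return out.reshape(v0.shape).astype(v0.dtype)

# Hierarchical merge: SLERP pairs of models, then SLERP the results.
rng = np.random.default_rng(0)
w1, w2, w3, w4 = rng.standard_normal((4, 8, 8))  # stand-ins for real weights
merged_a = slerp(0.5, w1, w2)
merged_b = slerp(0.5, w3, w4)
final = slerp(0.5, merged_a, merged_b)
```

In practice this is done per-tensor across every matching parameter of the constituent checkpoints (tools such as mergekit automate this), with the interpolation factor often varied per layer.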
