kainatq/KQ_Omni-12B-v1
Text Generation · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

KQ_Omni-12B-v1 is a 12-billion-parameter language model developed by kainatq, created by merging four Mistral-Nemo-based models: Mistral-Nemo-Base-2407, shisa-v2-mistral-nemo-12b, Fireball-Mistral-Nemo-12B-Philos, and Denker-mistral-nemo-12B. The merge combines the strengths of its constituent models into a versatile foundation for general natural language processing tasks. With a 32,768-token context length, it suits applications that require extensive contextual understanding.
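As a sketch of how the model might be used, the snippet below loads it via the Hugging Face `transformers` library; the model ID and 32k context length come from the card above, while the generation settings and the `fits_in_context` helper are illustrative assumptions, not recommendations from the model author.

```python
# Sketch: running KQ_Omni-12B-v1 with Hugging Face transformers.
# MODEL_ID and the 32k context length are taken from the model card;
# everything else (dtype, device placement, sampling budget) is an
# assumption for illustration.
MODEL_ID = "kainatq/KQ_Omni-12B-v1"
CTX_LEN = 32768  # 32k-token context window stated on the card


def fits_in_context(n_prompt_tokens: int, n_new_tokens: int,
                    ctx_len: int = CTX_LEN) -> bool:
    """Check that the prompt plus requested generation fits the window."""
    return n_prompt_tokens + n_new_tokens <= ctx_len


if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = "Summarize the idea of model merging in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    assert fits_in_context(inputs["input_ids"].shape[1], 256)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The heavy download and generation are guarded behind `__main__`, so the context-budget helper can be reused or tested without pulling 12B parameters.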
