magibu/magibu-11b-v0.8
Modality: Vision · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Feb 17, 2026 · Architecture: Transformer · Access: Gated

Magibu-11b-v0.8 is an 11.3-billion-parameter multimodal (vision + text) language model developed by Magibu AI Research and optimized specifically for Turkish. Built on a Google Gemma-3-compatible architecture with a 32,768-token context window, it features a highly efficient custom Turkish tokenizer. The model excels at Turkish language tasks, outperforming larger models on benchmarks such as Cetvel and Turkish MMLU, particularly in question answering and summarization.
