lamapi/next-4b is a 4-billion-parameter multimodal Vision-Language Model (VLM) based on Gemma 3, developed by Lamapi. It is Türkiye's first open-source VLM, designed for efficient reasoning and context-aware multimodal output, including understanding text and images and generating text and image descriptions. Optimized for low-resource deployment via 8-bit quantization, it excels at visual understanding, reasoning, and creative generation, with strong support for Turkish alongside multilingual capabilities.