INSAIT-Institute/BgGPT-Gemma-3-4B-IT
BgGPT-Gemma-3-4B-IT is a 4.3 billion parameter instruction-tuned language model developed by INSAIT as part of the BgGPT 3.0 series. Built on the Gemma 3 architecture, it offers vision-language understanding, processing both text and images within the same context. It is specifically adapted for Bulgarian language tasks, features an extended effective context of 131k tokens, and demonstrates improved instruction following in multi-turn conversations and on complex prompts.
BgGPT-Gemma-3-4B-IT Overview
BgGPT-Gemma-3-4B-IT is a 4.3 billion parameter instruction-tuned model from the BgGPT 3.0 series, developed by INSAIT. Built on the Gemma 3 architecture and specifically adapted for the Bulgarian language, it improves on its predecessors in context length, multimodal input, and instruction following.
Key Capabilities
- Vision-Language Understanding: Processes and understands both text and images within the same conversational context.
- Enhanced Instruction-Following: Improved handling of multi-turn conversations, complex instructions, and system prompts, thanks to training on a broader range of tasks.
- Extended Context Length: Features an effective context window of 131k tokens, enabling longer and more intricate interactions.
- Updated Knowledge: Pretraining data through May 2025 and instruction fine-tuning through October 2025.
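The multi-turn conversations mentioned above follow Gemma's chat format. As a minimal sketch, assuming BgGPT-Gemma-3-4B-IT inherits Gemma's `<start_of_turn>`/`<end_of_turn>` turn markers and its `model` role name (in practice, prefer the tokenizer's `apply_chat_template`, which applies the correct template automatically):

```python
def build_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Gemma-style
    chat prompt string. Assumption: BgGPT keeps Gemma's turn markers."""
    parts = []
    for m in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    # End with an open model turn so generation continues the conversation.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

# A short Bulgarian multi-turn exchange (illustrative content).
messages = [
    {"role": "user", "content": "Здравей! Коя е столицата на България?"},
    {"role": "assistant", "content": "Столицата на България е София."},
    {"role": "user", "content": "Разкажи ми повече за нея."},
]
prompt = build_prompt(messages)
```

The resulting string is what the tokenizer would encode before generation; with the real model, the same `messages` list (with image entries added for multimodal turns) would be passed to the processor's chat-templating API instead of being formatted by hand.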
Good For
- Applications requiring multimodal understanding (text and image) in Bulgarian.
- Complex conversational AI and chatbots that need to follow multi-turn instructions.
- Tasks benefiting from a long context window for detailed analysis or extended dialogues.
- Developers seeking a Bulgarian-optimized LLM with recent knowledge updates.