BgGPT-Gemma-3-12B-IT Overview
BgGPT-Gemma-3-12B-IT is a 12-billion-parameter instruction-tuned model from the BgGPT 3.0 series, developed by INSAIT. Built on the Gemma 3 architecture and adapted specifically for Bulgarian, it improves on previous BgGPT versions in several concrete ways, summarized below.
Key Capabilities
- Vision-Language Understanding: Processes and understands both text and images within the same context, enabling multimodal interactions.
- Enhanced Instruction-Following: Demonstrates improved performance on diverse tasks, multi-turn conversations, and complex instructions, including the use of system prompts.
- Extended Context Length: Supports an effective context window of 131,072 tokens (128K), enabling longer conversations and more elaborate instructions.
- Updated Knowledge Base: Pretraining data extends to May 2025 and instruction fine-tuning data to October 2025, giving the model a recent knowledge cutoff.
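As a sketch of how a multimodal request to a Gemma-3-based model like this one might be structured: the message layout below follows the common Hugging Face image-text-to-text chat convention and is an assumption, not something specified by this card; the exact format is determined by the processor and chat template shipped with the model.

```python
# Hedged sketch (assumed Hugging Face multimodal chat convention,
# not confirmed by this model card): a text+image conversation that
# would typically be passed to AutoProcessor.apply_chat_template(...).
messages = [
    {
        "role": "system",
        # System prompt in Bulgarian: "You are a helpful assistant."
        "content": [{"type": "text", "text": "Ти си полезен асистент."}],
    },
    {
        "role": "user",
        "content": [
            # Hypothetical image URL for illustration only.
            {"type": "image", "url": "https://example.com/photo.jpg"},
            # "Describe the image in Bulgarian."
            {"type": "text", "text": "Опиши изображението на български."},
        ],
    },
]

def roles(msgs):
    """Return the ordered list of roles in a message sequence."""
    return [m["role"] for m in msgs]

print(roles(messages))  # → ['system', 'user']
```

In a real application these messages, together with the actual image, would be tokenized by the model's processor before generation; only the data layout is shown here.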
Good for
- Multimodal AI Applications: Ideal for scenarios requiring the processing of both textual and visual inputs, such as image description or visual question answering in Bulgarian.
- Complex Conversational Agents: Well-suited for building sophisticated chatbots and virtual assistants that can handle extended dialogues and intricate user requests.
- Bulgarian Language Processing: Optimized for tasks and applications specifically targeting the Bulgarian language, offering strong performance in a local context.
- Research and Development: Provides a robust foundation for further research and development in large language models, particularly in multimodal and instruction-following domains.