BgGPT-Gemma-3-27B-IT Overview
BgGPT-Gemma-3-27B-IT is a 27-billion-parameter instruction-tuned model from INSAIT's BgGPT 3.0 series, built on the Gemma 3 architecture and adapted specifically for the Bulgarian language. The series is released in 4B, 12B, and 27B sizes; this document covers the 27B variant.
Key Capabilities & Improvements
- Vision-Language Understanding: Processes text and images within the same context, enabling multimodal interactions (a usage sketch follows this list).
- Enhanced Instruction-Following: Handles complex instructions, multi-turn conversations, and system prompts more reliably than earlier BgGPT releases.
- Extended Context Length: Features an effective context window of 131,072 tokens (128K), facilitating longer and more intricate interactions.
- Updated Knowledge Base: Incorporates pretraining data up to May 2025 and instruction fine-tuning data up to October 2025.
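The sketch below shows how multimodal inference might look through Transformers, following the standard Gemma 3 usage pattern. The repository ID `INSAIT-Institute/BgGPT-Gemma-3-27B-IT` is an assumption based on the naming of earlier BgGPT releases, and the image URL is a placeholder; `Gemma3ForConditionalGeneration` requires a recent Transformers version (4.50+).

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

# Assumed repository ID, following the naming of earlier BgGPT releases.
model_id = "INSAIT-Institute/BgGPT-Gemma-3-27B-IT"

processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 27B weights at roughly 54 GB
    device_map="auto",
)

# A mixed image + text turn; the image URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder
            {"type": "text", "text": "Опиши това изображение."},  # "Describe this image."
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```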
Usage Considerations
The model can be run with the Hugging Face Transformers library or served with vLLM for efficient inference. vLLM also supports dynamic FP8 quantization, which roughly halves weight memory with minimal quality loss; this requires a GPU with compute capability >= 8.9, such as the RTX 4090 (8.9) or H100 (9.0). A serving sketch follows below.
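A minimal sketch of FP8 serving with vLLM, assuming the same repository ID as above. Passing `quantization="fp8"` triggers vLLM's online (dynamic) FP8 weight quantization at load time, so no pre-quantized checkpoint is needed.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="INSAIT-Institute/BgGPT-Gemma-3-27B-IT",  # assumed repository ID
    quantization="fp8",  # dynamic FP8: roughly 2x weight-memory reduction
    max_model_len=8192,  # cap context to fit memory; the model supports up to 131,072
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.chat(
    [{"role": "user", "content": "Коя е столицата на България?"}],  # "What is the capital of Bulgaria?"
    params,
)
print(outputs[0].outputs[0].text)
```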