ai-for-good-lab/byol-mri-12b-it

Vision · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Apr 15, 2026 · License: gemma · Architecture: Transformer · Status: Cold

ai-for-good-lab/byol-mri-12b-it is a 12-billion-parameter instruction-tuned language model developed by ai-for-good-lab, built on Google's Gemma-3-12b-pt base model. It is fine-tuned for the Māori (mri) language using the BYOL framework, which extends LLMs to low-resource languages. The model targets instruction-following tasks in Māori and supports a 32,768-token context length. It is an intermediate checkpoint on the way to more comprehensive Māori language models.


BYOL Māori 12B IT: Instruction-Tuned for Low-Resource Languages

This model, developed by ai-for-good-lab, is a 12 billion parameter instruction-tuned language model specifically designed for the Māori (mri) language. Built upon Google's Gemma-3-12b-pt base model, it leverages the BYOL framework to adapt large language models for low-resource languages.
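
The card does not document a loading recipe, but a checkpoint derived from Gemma-3-12b-pt should presumably load through the standard Hugging Face `transformers` stack. Below is a minimal text-only inference sketch; the chat-template call, prompt, and sampling settings are illustrative assumptions, not documented usage.

```python
# Minimal text-only inference sketch, assuming the checkpoint loads like any
# Gemma-3-based instruct model via Hugging Face transformers (not documented
# on this card). Prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-for-good-lab/byol-mri-12b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists FP8 serving; bf16 is a common local default
    device_map="auto",
)

# A Māori instruction: "Explain what the haka is." (example prompt, not from the card)
messages = [{"role": "user", "content": "Whakamāramahia mai he aha te haka."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```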

Key Capabilities

  • Māori Language Proficiency: Instruction-tuned for understanding and generating text in Māori.
  • Instruction Following: Excels at responding to prompts and carrying out instructions in Māori; trained on translated instruction-following datasets (SmolTalk2 + AYA).
  • BYOL Framework Integration: Part of a system designed to efficiently extend LLM capabilities to languages with limited digital resources.
  • Intermediate Checkpoint: This model is an instruction-tuned component intended to be combined with other BYOL checkpoints; users seeking the best results should use the merged variant. A toy illustration of checkpoint merging follows this list.
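
The BYOL merge recipe itself is not spelled out on this card. As a purely hypothetical sketch of what combining checkpoints can look like, the snippet below does a plain linear weight average of two models; the second model id is a placeholder and the interpolation weight is arbitrary. For real use, prefer the officially published merged variant.

```python
# Hypothetical sketch of combining checkpoints by plain linear weight
# averaging. This is NOT the documented BYOL merge recipe; the second
# model id below is a placeholder, and alpha is an arbitrary choice.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "ai-for-good-lab/byol-mri-12b-it", torch_dtype=torch.bfloat16
)
other = AutoModelForCausalLM.from_pretrained(
    "ai-for-good-lab/byol-mri-12b-placeholder",  # placeholder: another BYOL checkpoint
    torch_dtype=torch.bfloat16,
)

alpha = 0.5  # interpolation weight (illustrative)
other_sd = other.state_dict()
merged = {
    name: alpha * tensor + (1.0 - alpha) * other_sd[name]
    for name, tensor in base.state_dict().items()
}
base.load_state_dict(merged)
base.save_pretrained("byol-mri-12b-merged-sketch")
```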

Good For

  • Māori Language Applications: Developing applications that require instruction-following capabilities in Māori.
  • Research in Low-Resource NLP: Exploring methods for adapting LLMs to languages with limited data.
  • Building Blocks for Comprehensive Māori Models: Serving as a foundational component for more advanced Māori language AI systems.