Ganda Gemma FLN Bridge: Bilingual Pedagogical Content Generation
CraneAILabs/ganda-gemma-fln-bridge is a 1-billion-parameter model from Crane AI Labs, designed for foundational literacy and numeracy (FLN) in Uganda. Built on the google/gemma-3-1b-it architecture, it is a bilingual (English/Luganda) model focused on generating educational content for Uganda's P1–P3 curriculum.
Key Capabilities & Features
- Pedagogical Content Generation: Excels at creating structured lesson plans and literacy assessments (MCQ, fill-in-blank) aligned with the Ugandan primary school curriculum.
- Bilingual Support: Operates in both English and Luganda, with specific optimizations for Luganda linguistic understanding.
- Model Architecture: A linear weight interpolation (70% Learner + 30% GRPO-600) of two models derived from CraneAILabs/ganda-gemma-1b, which itself underwent Luganda continual pre-training; the merge involved no additional training.
- Context Length: Supports a 32,768-token context window.
- Performance: Achieves 66% on Pedagogical Content Knowledge (PCK) and 58.8% on Luganda Linguistic Understanding (ELL MC), significantly narrowing the gap to larger 12B models.
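The linear weight interpolation above can be sketched as follows. This is an illustrative toy, not the actual merge script: real checkpoints hold tensors per parameter, whereas plain floats stand in here, and the function name is hypothetical. Only the 70%/30% split is taken from the model card.

```python
# Illustrative sketch of linear weight interpolation (model merging).
# Toy dictionaries of floats stand in for full Gemma 3 state dicts.
# The 0.7 / 0.3 weights follow the 70% Learner + 30% GRPO-600 split.

LEARNER_WEIGHT = 0.7   # fine-tuned Learner model
GRPO_WEIGHT = 0.3      # GRPO-600 model

def interpolate(learner_state, grpo_state, alpha=LEARNER_WEIGHT):
    """Return a new state dict: alpha * learner + (1 - alpha) * grpo."""
    assert learner_state.keys() == grpo_state.keys(), "architectures must match"
    return {
        name: alpha * learner_state[name] + (1 - alpha) * grpo_state[name]
        for name in learner_state
    }

# Toy example with scalar "parameters":
learner = {"layer.weight": 1.0, "layer.bias": 0.0}
grpo = {"layer.weight": 0.0, "layer.bias": 1.0}
merged = interpolate(learner, grpo)
# merged["layer.weight"] → 0.7; merged["layer.bias"] ≈ 0.3
```

Because the merged weights are a convex combination of two checkpoints with identical architecture, the result loads like any other Gemma 3 1B checkpoint.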
Intended Use Cases
- Generating structured bilingual lesson plans for Ugandan primary school teachers.
- Creating literacy assessments for P1–P3 students.
- Serving as an offline teacher assistant on mobile devices, offering efficient content generation.
- Research in low-resource language educational AI.
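A lesson-plan request like those above might be formatted for the model as follows. The `<start_of_turn>`/`<end_of_turn>` chat markers are the standard Gemma-family format; the helper function and instruction wording are hypothetical examples, not an official prompt from Crane AI Labs.

```python
# Hedged sketch: building a Gemma-style chat prompt that asks the model
# for a structured lesson plan. Function name and instruction text are
# illustrative assumptions; only the chat markers follow the Gemma format.

def build_lesson_plan_prompt(topic: str, grade: str, language: str = "English") -> str:
    instruction = (
        f"Create a structured {language} lesson plan on '{topic}' "
        f"for Ugandan {grade} pupils. Include objectives, materials, "
        f"activities, and a short fill-in-the-blank assessment."
    )
    # One user turn, then an open model turn for the generation to fill.
    return (
        f"<start_of_turn>user\n{instruction}<end_of_turn>\n"
        f"<start_of_turn>model\n"
    )

prompt = build_lesson_plan_prompt("counting to 20", "P1", language="Luganda")
```

In practice, applying the tokenizer's own chat template (e.g. via `apply_chat_template` in Transformers) is safer than hand-assembling markers, since it stays in sync with the model's training format.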
Known Limitations
- Exhibits position bias in multiple-choice questions.
- Struggles with short-form Luganda linguistic understanding and arithmetic (e.g., two-digit multiplication).
- Luganda coherence degrades beyond ~500 tokens, and stable generation requires `repetition_penalty=1.2`.
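The effect of `repetition_penalty=1.2` can be sketched in isolation. The scheme below mirrors the common approach used by mainstream inference libraries (positive logits divided by the penalty, negative logits multiplied by it); the function itself is an illustrative stand-in, not the model's actual decoding code.

```python
# Why repetition_penalty=1.2 helps: at each decoding step, logits of
# tokens already present in the output are pushed down, discouraging
# the repetition loops that degrade long Luganda generations.

def apply_repetition_penalty(logits, generated_token_ids, penalty=1.2):
    """Return a copy of `logits` with previously generated tokens penalized."""
    adjusted = list(logits)
    for token_id in set(generated_token_ids):
        score = adjusted[token_id]
        # Shrink positive scores; push negative scores further down.
        adjusted[token_id] = score / penalty if score > 0 else score * penalty
    return adjusted

logits = [2.4, -1.0, 0.6]   # toy vocabulary of three tokens
history = [0, 2]            # tokens 0 and 2 were already generated
penalized = apply_repetition_penalty(logits, history)
# token 0: 2.4 / 1.2 = 2.0; token 2: 0.6 / 1.2 = 0.5; token 1 unchanged
```

A penalty of 1.0 leaves logits untouched; values much above 1.2 can over-suppress legitimately repeated words, so the recommended setting is a balance rather than a hard rule.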