israel/gemma
TEXT GENERATION
- Concurrency Cost: 1
- Model Size: 2.6B
- Quant: BF16
- Ctx Length: 8k
- Published: Feb 8, 2026
- License: other
- Architecture: Transformer
israel/gemma is a 2.6-billion-parameter language model, fine-tuned by israel from the base model israel/gemmasft on the afri_multiturn dataset. It is optimized for multi-turn conversational tasks, and its primary use case is applications requiring nuanced, multi-turn interactions, particularly in scenarios aligned with its training data.
Overview
israel/gemma is a 2.6-billion-parameter language model fine-tuned by israel from the israel/gemmasft base model on the afri_multiturn dataset, which gives it a specialization in multi-turn conversation. The model supports a context length of 8192 tokens.
Key Capabilities
- Multi-turn conversation: Optimized for handling sequential dialogue interactions based on its training on the afri_multiturn dataset.
- Fine-tuned performance: task-specific fine-tuning on afri_multiturn may improve quality on conversational tasks that resemble the training data.
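As a minimal sketch of how a multi-turn conversation might be serialized for this model: since israel/gemma is derived from a Gemma-family base, it presumably expects Gemma's `<start_of_turn>`/`<end_of_turn>` chat markup (an assumption; in practice, prefer the tokenizer's own chat template if one ships with the model).

```python
def build_gemma_prompt(messages):
    """Format a multi-turn conversation into Gemma-style chat markup.

    `messages` is a list of {"role": "user" | "model", "content": str}.
    The <start_of_turn>/<end_of_turn> template is an assumption carried
    over from Gemma-family base models, not confirmed by this model card.
    """
    parts = []
    for msg in messages:
        parts.append(
            f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n"
        )
    # Open a model turn so generation continues as the assistant's reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)


conversation = [
    {"role": "user", "content": "Habari! Unaweza kunisaidia?"},
    {"role": "model", "content": "Ndiyo, nitakusaidia."},
    {"role": "user", "content": "Asante."},
]
prompt = build_gemma_prompt(conversation)
```

The resulting `prompt` string would then be tokenized and passed to the model for generation.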
Good for
- Applications requiring models specialized in multi-turn dialogue.
- Use cases where a smaller, fine-tuned model (2.6B parameters) is preferred for efficiency while maintaining conversational ability.
- Scenarios aligned with the characteristics and language patterns present in the afri_multiturn dataset.
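Because the context window is 8192 tokens, long-running dialogues eventually need their history trimmed. The sketch below keeps only the most recent turns that fit a token budget; the 4-characters-per-token heuristic and the 512-token reply reserve are illustrative assumptions, and a real deployment should count tokens with the model's actual tokenizer.

```python
CTX_LIMIT = 8192  # israel/gemma's context length, per the model card
RESERVE = 512     # tokens kept free for the model's reply (assumption)


def approx_tokens(text):
    # Rough heuristic: ~4 characters per token (assumption; use the
    # model's real tokenizer for exact counts).
    return max(1, len(text) // 4)


def trim_history(messages, limit=CTX_LIMIT - RESERVE):
    """Keep the most recent turns whose combined size fits the budget.

    Walks the conversation from newest to oldest, accumulating estimated
    token cost, and drops everything older than the first turn that
    would overflow the limit.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = approx_tokens(msg["content"])
        if used + cost > limit:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

For example, a single very long early turn would be dropped while recent short turns survive, preserving the dialogue state that matters most for the next reply.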