m8than/gemma-2-9b-it
Type: Text generation
Concurrency cost: 1
Model size: 9B
Quantization: FP8
Context length: 16k
Published: May 9, 2025
License: Gemma
Architecture: Transformer

The m8than/gemma-2-9b-it model is a 9-billion-parameter, instruction-tuned variant of Google's Gemma 2 architecture with a 16,384-token (16k) context length. This build is quantized (FP8 in this listing) and optimized for efficient fine-tuning with Unsloth, enabling faster training and lower memory consumption. It is well suited to developers who want to quickly fine-tune a capable Gemma 2 model in resource-constrained environments such as Google Colab.
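As a minimal sketch of how this model could be queried with Hugging Face `transformers` (the generation settings and the manual Gemma 2 turn formatting below are illustrative assumptions, not part of this listing; in practice `tokenizer.apply_chat_template` can build the prompt for you):

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single user turn in the Gemma 2 chat layout:
    a user turn closed by <end_of_turn>, then an open model turn."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(user_message: str,
             model_id: str = "m8than/gemma-2-9b-it",
             max_new_tokens: int = 128) -> str:
    """Load the model and generate a reply (requires a GPU with enough
    memory for a 9B model; imported lazily so the prompt helper above
    works without transformers installed)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_gemma_prompt(user_message),
                       return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize FP8 quantization in one sentence."))
```

The full 16k context window applies to prompt and completion combined, so long prompts leave correspondingly fewer tokens for the reply.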
