google/gemma-1.1-2b-it is a 2.6-billion-parameter, instruction-tuned, decoder-only large language model from Google, part of the Gemma family of lightweight open models built from the same research and technology used to create the Gemini models. This updated version, Gemma 1.1, was trained with a novel RLHF method, yielding substantial gains in quality, coding capability, factuality, instruction following, and multi-turn conversation quality. Its relatively small size and optimized performance make it suitable for deployment in resource-limited environments across a range of text generation tasks.
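A minimal sketch of running the model for chat-style text generation with the Hugging Face transformers library; it assumes transformers and torch are installed, the gated model weights are accessible, and the prompt is purely illustrative.

```python
# Sketch: generate text with google/gemma-1.1-2b-it via transformers
# (assumes `transformers` and `torch` are installed and access to the
# gated weights has been granted; prompt content is an example only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-1.1-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # lower memory footprint on supported hardware
    device_map="auto",
)

# The instruction-tuned Gemma variants expect the chat template format.
messages = [
    {"role": "user", "content": "Explain what a decoder-only LLM is in one sentence."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

With bfloat16 weights the 2B-class model fits comfortably on a single consumer GPU or, more slowly, on CPU, which is the resource-limited deployment scenario described above.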