Gemma-7b is an 8.5-billion-parameter, text-to-text, decoder-only large language model developed by Google, built from the same research and technology as the Gemini models. It is available in English with open weights in a pre-trained variant, and is optimized for a variety of text generation tasks, including question answering, summarization, and reasoning. Its compact size and 8192-token context length make it suitable for deployment in resource-limited environments such as laptops or modest cloud infrastructure, broadening access to advanced AI capabilities.