google/gemma-2-27b

Hugging Face
Text Generation · Model Size: 27B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 2 · Published: Jun 24, 2024 · License: Gemma · Architecture: Transformer

Gemma 2 27B is a 27 billion parameter, decoder-only large language model developed by Google, part of the Gemma family built from the same research as Gemini models. It is a text-to-text model available in English, designed for a variety of text generation tasks including question answering, summarization, and reasoning. Trained on 13 trillion tokens, it offers state-of-the-art performance for its size, making it suitable for deployment in resource-limited environments.


Google Gemma 2 27B: Overview and Capabilities

Gemma 2 27B is a 27 billion parameter, decoder-only large language model developed by Google, leveraging the same research and technology as the Gemini models. This English-language, text-to-text model is designed for a broad spectrum of natural language processing tasks, offering open weights for both pre-trained and instruction-tuned variants. Its relatively compact size, combined with strong performance, enables deployment on devices with limited resources, such as laptops, desktops, or private cloud infrastructure.
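As a rough sense of what "limited resources" means here, the memory needed just to hold the weights scales with parameter count times bytes per parameter. The sketch below is back-of-the-envelope arithmetic only (it ignores activations, KV cache, and runtime overhead) and assumes 1 byte per parameter for FP8 and 2 for FP16:

```python
def estimate_weight_memory_gib(num_params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory to hold model weights alone, in GiB.

    Ignores activations, KV cache, and framework overhead, so real
    deployments need headroom beyond this figure.
    """
    total_bytes = num_params_billion * 1e9 * bytes_per_param
    return total_bytes / (1024 ** 3)

# Gemma 2 27B: FP8 (1 byte/param) vs. FP16 (2 bytes/param)
fp8_gib = estimate_weight_memory_gib(27, 1)   # roughly 25 GiB
fp16_gib = estimate_weight_memory_gib(27, 2)  # roughly 50 GiB
```

This is why the FP8 quantization listed above matters: it halves the weight footprint relative to FP16, bringing the model within reach of a single high-memory accelerator or a well-provisioned workstation.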

Key Capabilities

  • Versatile Text Generation: Excels in tasks like question answering, summarization, and logical reasoning.
  • Optimized for Accessibility: Designed for efficient deployment in resource-constrained environments.
  • Robust Training: Trained on a diverse dataset of 13 trillion tokens, including web documents, code, and mathematical texts, to enhance its understanding and generation across various domains.
  • Responsible AI Focus: Developed with rigorous CSAM and sensitive data filtering, and evaluated against extensive ethics and safety benchmarks, including RealToxicity, BBQ, and TruthfulQA.

Intended Usage

  • Content Creation: Generate creative text formats, marketing copy, email drafts, and code.
  • Conversational AI: Power chatbots and virtual assistants for customer service or interactive applications.
  • Research and Education: Serve as a foundation for NLP research, language learning tools, and knowledge exploration through summarization and question answering.
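For the conversational use case, the instruction-tuned Gemma variants expect user turns wrapped in `<start_of_turn>`/`<end_of_turn>` control tokens, while the base pre-trained model takes plain text. A minimal prompt-formatting sketch, assuming the single-turn case (in practice, tokenizer chat templates handle this automatically):

```python
def format_gemma_turn(user_message: str) -> str:
    """Wrap one user message in Gemma's chat-turn markers and cue the reply.

    Applies to instruction-tuned Gemma variants; the base model is
    prompted with plain text instead.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_turn("Summarize this support ticket in two sentences.")
```

The trailing `<start_of_turn>model\n` cue is what signals the model to generate the assistant turn rather than continue the user's text.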

Performance Highlights

Gemma 2 27B posts strong benchmark results: 75.2 on MMLU (5-shot, top-1), 86.4 on HellaSwag (10-shot), 51.8 on HumanEval (pass@1), and 74.0 on GSM8K (5-shot, maj@1). These scores indicate strong reasoning, common-sense, code-generation, and mathematical problem-solving ability relative to similarly sized open models.
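For readers unfamiliar with the pass@1 metric used for HumanEval: the standard unbiased estimator (introduced in the Codex/HumanEval paper) computes, per problem, the probability that at least one of k sampled completions passes the tests, given n samples of which c are correct. The sketch below implements that published formula; it is illustrative and not necessarily the exact evaluation harness used for the score above:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for one problem.

    n: completions sampled for the problem
    c: completions that passed the unit tests
    k: budget being estimated (k=1 for pass@1)
    """
    if n - c < k:
        return 1.0  # every size-k draw contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# 10 samples, 3 correct: pass@1 is simply the per-sample solve rate, 0.3
single = pass_at_k(10, 3, 1)
```

The benchmark-level figure is then the mean of this per-problem estimate across the dataset; maj@1 (used for GSM8K) instead scores whether the majority-voted answer is correct.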