nbeerbower/gemma2-gutenberg-27B
Hugging Face

Text generation · Concurrency cost: 2 · Model size: 27B · Quant: FP8 · Context length: 32K · Published: Sep 9, 2024 · License: Gemma · Architecture: Transformer

nbeerbower/gemma2-gutenberg-27B is a 27-billion-parameter language model based on Google's Gemma-2-27B-IT. It was fine-tuned with the ORPO method on the jondurbin/gutenberg-dpo-v0.1 dataset and specializes in generating high-quality, instruction-following text, making it well suited to tasks that require nuanced language understanding and generation.


Model Overview

nbeerbower/gemma2-gutenberg-27B is a 27-billion-parameter instruction-tuned language model built on the Google Gemma-2-27B-IT architecture. What distinguishes it is its fine-tuning: the ORPO (Odds Ratio Preference Optimization) method was applied for three epochs on an 80GB A100 GPU, using the jondurbin/gutenberg-dpo-v0.1 dataset.
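To make the ORPO objective concrete, here is a minimal, illustrative sketch of its odds-ratio penalty in plain Python. It assumes scalar sequence log-probabilities for the chosen and rejected responses of a preference pair; the function names and the default weight `lam` are hypothetical, not taken from this model's training configuration.

```python
import math

def log_odds(logp: float) -> float:
    # odds(y|x) = P / (1 - P); in log space: logp - log(1 - exp(logp))
    return logp - math.log(1.0 - math.exp(logp))

def orpo_penalty(logp_chosen: float, logp_rejected: float) -> float:
    # Odds-ratio term: -log sigmoid(log_odds(chosen) - log_odds(rejected)).
    # Smaller when the model prefers the chosen response over the rejected one.
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-ratio)))

def orpo_loss(nll_chosen: float, logp_chosen: float,
              logp_rejected: float, lam: float = 0.1) -> float:
    # ORPO combines the standard SFT negative log-likelihood on the chosen
    # response with the weighted odds-ratio penalty (lam is illustrative).
    return nll_chosen + lam * orpo_penalty(logp_chosen, logp_rejected)
```

The key property is that ORPO needs no separate reference model: the penalty is computed from the policy's own probabilities, alongside the usual supervised loss.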

Key Capabilities

  • Instruction Following: Optimized to understand and execute complex instructions effectively.
  • High-Quality Text Generation: Benefits from ORPO preference training on a curated dataset of chosen/rejected pairs, enhancing output quality.
  • Gemma-2 Foundation: Inherits the strong base capabilities of the Gemma-2 series, including a 32K context length.
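Since the model inherits Gemma-2's instruction format, a prompt must use Gemma's turn markers. Below is a minimal sketch of that format; in practice you would call `tokenizer.apply_chat_template` from the transformers library rather than build the string by hand, and the `<bos>` handling here is an assumption (the tokenizer normally adds it for you).

```python
def build_gemma2_prompt(turns: list[tuple[str, str]]) -> str:
    """Format a conversation with Gemma-2's <start_of_turn>/<end_of_turn> markers.

    `turns` is a list of (role, text) pairs where role is "user" or "model".
    Illustrative only; prefer tokenizer.apply_chat_template in real code.
    """
    prompt = "<bos>"  # assumed here; usually inserted by the tokenizer
    for role, text in turns:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    # Cue the model to produce the next assistant turn.
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma2_prompt([("user", "Summarize Moby-Dick in one sentence.")])
```

Generation is then a matter of feeding `prompt` to the model and sampling until the `<end_of_turn>` marker.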

Good For

  • Applications requiring precise and coherent text generation based on user prompts.
  • Tasks where instruction adherence and output quality are paramount.
  • Developers looking for a Gemma-2 variant with enhanced, preference-tuned (ORPO) performance.