eekay/gemma-2b-it-lion-numbers-ft

Source: Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 2.5B parameters
  • Quantization: BF16
  • Context length: 8k tokens
  • Published: Aug 30, 2025
  • Architecture: Transformer

eekay/gemma-2b-it-lion-numbers-ft is a 2.5-billion-parameter instruction-tuned model based on the Gemma architecture, published by eekay. It targets general language understanding and generation, and its compact size keeps deployment costs modest. It supports a context length of 8192 tokens, which is sufficient for applications with moderate context requirements.


Model Overview

As the identifier suggests, eekay/gemma-2b-it-lion-numbers-ft appears to be a fine-tune ("-ft") of the 2.5-billion-parameter Gemma 2B instruct model. It is designed to follow natural-language instructions and generate human-like text, and its 8192-token context window lets it handle moderately long prompts and multi-turn exchanges.

Key Capabilities

  • Instruction Following: Designed to respond to a variety of instructions, making it adaptable for different NLP tasks.
  • General Text Generation: Capable of generating coherent and contextually relevant text.
  • Efficient Deployment: At 2.5 billion parameters, it is lighter to serve than larger models, in both memory footprint and latency.
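Since this is an instruction-tuned Gemma variant, prompts presumably follow the base Gemma chat format with `<start_of_turn>` / `<end_of_turn>` markers. This is an assumption based on the parent model, not something the card confirms, so check the model's tokenizer chat template before relying on it. A minimal prompt-builder sketch:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Gemma instruct chat style.

    Assumes this fine-tune keeps the base gemma-2b-it turn markers;
    verify against the model's own chat template before use.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("List three prime numbers.")
```

In practice, `tokenizer.apply_chat_template(...)` from the `transformers` library produces this formatting automatically when the repository ships a chat template.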

Good for

  • Applications requiring a compact yet capable instruction-tuned model.
  • Tasks involving general language understanding and generation.
  • Scenarios where moderate context length (8192 tokens) is sufficient.
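With an 8192-token window, long inputs must be trimmed before generation. A generic sketch (independent of any particular tokenizer; the `reserve` budget for the reply is an illustrative choice, not part of the model card) that keeps the most recent tokens:

```python
def fit_context(token_ids: list[int], max_len: int = 8192, reserve: int = 256) -> list[int]:
    """Trim a token-id sequence to fit the model's context window.

    Keeps the most recent tokens and reserves `reserve` slots for the
    generated response. `max_len` matches this model's 8192-token limit;
    `reserve` is a hypothetical default chosen for illustration.
    """
    budget = max_len - reserve
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```

Short inputs pass through unchanged; oversized ones are truncated from the front, which preserves the end of the conversation where the current instruction usually sits.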