NicolasRodriguez/manaba_gemma_2_2b

Hosted on: Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 2.6B parameters
  • Quantization: BF16
  • Context length: 8k
  • Published: Dec 3, 2025
  • Architecture: Transformer
  • Serving status: Warm

NicolasRodriguez/manaba_gemma_2_2b is a 2.6-billion-parameter decoder-only large language model, fine-tuned for Spanish on top of Google's Gemma 2 2B architecture. It targets text generation tasks such as question answering, summarization, and reasoning, while adhering to Google's safety and integrity policies. Its relatively small size makes it suitable for deployment in resource-constrained environments.


Model Overview

NicolasRodriguez/manaba_gemma_2_2b is a 2.6-billion-parameter language model, a fine-tuned variant of Google's Gemma 2 2B base model that has been further trained on Spanish text. The model retains Google's integrity and safety policies, which aim to keep generated content free of offensive material.

Key Capabilities

  • Spanish Text Generation: Optimized for generating text in Spanish, building upon the robust capabilities of the Gemma 2 architecture.
  • Resource-Efficient Deployment: Its 2.6B parameter size allows for deployment on devices with limited resources, such as laptops or desktops.
  • General Text Tasks: Capable of handling various text generation tasks including question answering, summarization, and reasoning.
  • Safety Compliant: Developed with adherence to Google's usage license and safety agreements, aiming to prevent the generation of harmful content.
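The generation tasks above can be sketched with the Hugging Face `transformers` API. The checkpoint id comes from this card; the prompt, decoding settings, and the `generate_spanish` helper are illustrative assumptions, not part of an official recipe.

```python
# Minimal sketch of Spanish text generation with transformers.
# Assumes the checkpoint "NicolasRodriguez/manaba_gemma_2_2b" is available
# on the Hugging Face Hub; prompt and decoding settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NicolasRodriguez/manaba_gemma_2_2b"

def generate_spanish(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and return a completion for a Spanish prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
        device_map="auto",           # places layers on GPU/CPU as available
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Example prompt: "Summarize in one sentence what a language model is:"
    print(generate_spanish("Resume en una frase qué es un modelo de lenguaje:"))
```

Because the weights are only 2.6B parameters in BF16 (roughly 5 GB), this loads comfortably on a single consumer GPU or, more slowly, on CPU.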

Good For

  • Spanish Language Applications: Ideal for developers building applications that require text generation or understanding in Spanish.
  • Local Deployment: Suitable for use cases where models need to run efficiently on edge devices or personal hardware.
  • Educational and Research Purposes: Provides an accessible platform for experimenting with LLMs in a Spanish context, particularly for those interested in fine-tuning or adapting models for specific regional needs.
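For the local-deployment use case above, one common way to shrink the memory footprint further is 4-bit quantization via `bitsandbytes`. The configuration below is a sketch under that assumption (it requires the `bitsandbytes` package and a CUDA GPU), not a setup endorsed by the model author.

```python
# Sketch: memory-frugal loading for constrained hardware.
# Assumes bitsandbytes is installed; NF4 4-bit quantization is one common
# option for small-footprint inference, chosen here for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

def load_quantized(model_id: str = "NicolasRodriguez/manaba_gemma_2_2b"):
    """Return (tokenizer, model) with the weights quantized to 4-bit NF4."""
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute still runs in BF16
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",
    )
    return tokenizer, model
```

Quantizing to 4 bits trades a small amount of output quality for a weight footprint of under 2 GB, which is what makes laptop- or desktop-class deployment practical.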