Ninenuba/navy_model_gemma2b
Text Generation · Concurrency Cost: 1 · Model Size: 2.5B · Quant: BF16 · Ctx Length: 8k · Published: Apr 15, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
Ninenuba/navy_model_gemma2b is a 2.5 billion parameter causal language model developed by Ninenuba, based on Google's Gemma-2B architecture. It has been fine-tuned for Korean-language tasks on the Ninenuba/navy_sample_data2026 dataset and is designed for text generation applications, offering a context length of 8192 tokens.
Ninenuba/navy_model_gemma2b Overview
Ninenuba/navy_model_gemma2b is a specialized large language model built upon the Google Gemma-2B architecture, featuring 2.5 billion parameters and an 8192-token context window. Developed by Ninenuba, this model has undergone specific fine-tuning to enhance its performance in the Korean language.
Key Capabilities
- Korean Language Proficiency: Optimized for understanding and generating Korean text, fine-tuned on the Ninenuba/navy_sample_data2026 dataset.
- Text Generation: Designed for general-purpose text generation within its 8192-token context window.
- Gemma-2B Foundation: Benefits from the robust and efficient architecture of Google's Gemma-2B base model.
Good For
- Korean NLP Applications: Ideal for developers and researchers working on Korean-centric natural language processing tasks.
- Language-Specific Text Generation: Suitable for generating Korean content, responses, or creative text.
- Research and Development: Provides a solid foundation for further fine-tuning or experimentation within the Korean language domain.
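As a causal language model published with open weights, the model should be loadable through the standard Hugging Face `transformers` API. The sketch below is a minimal, unofficial example of generating Korean text with it; the prompt is illustrative, and the `generate` helper name is ours, not part of the model card. It assumes the `transformers` and `torch` packages are installed and that the weights are available under the `Ninenuba/navy_model_gemma2b` repository ID.

```python
# Minimal sketch: Korean text generation with Ninenuba/navy_model_gemma2b
# via Hugging Face transformers. BF16 matches the published quantization.

MODEL_ID = "Ninenuba/navy_model_gemma2b"


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Heavy imports live inside the function so the module can be
    # imported and inspected without downloading the 2.5B weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    # "What is the capital of Korea?" -- an illustrative Korean prompt.
    print(generate("한국의 수도는 어디인가요?"))
```

Generation parameters such as temperature or top-p sampling can be passed through `model.generate(...)` as needed; the defaults above use greedy decoding.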