heommi/gemma_2b_it_fintech
- Task: Text generation
- Concurrency cost: 1
- Model size: 2.5B
- Quantization: BF16
- Context length: 8k
- Published: Mar 24, 2026
- License: apache-2.0
- Architecture: Transformer
- Open weights; warm
The heommi/gemma_2b_it_fintech model is a 2.5 billion parameter instruction-tuned causal language model developed by heommi, based on the Google Gemma architecture. It is fine-tuned for the fintech domain using the heommi/fintech_2026 dataset. The model is optimized for text generation in Korean, making it suitable for financial applications that require Korean language understanding and generation.
heommi/gemma_2b_it_fintech: Fintech-Optimized Korean LLM
This model is an instruction-tuned language model developed by heommi, built on the Google Gemma architecture. With approximately 2.5 billion parameters, it is specifically designed for applications within the fintech domain.
Key Capabilities
- Domain-Specific Fine-tuning: Leverages the heommi/fintech_2026 dataset, indicating specialized knowledge in financial technology.
- Language Focus: Primarily supports the Korean language, making it suitable for Korean-centric fintech use cases.
- Text Generation: Optimized for various text generation tasks, likely including financial report summarization, customer service automation, or data analysis within the fintech sector.
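As an instruction-tuned Gemma derivative, the model would typically be loaded through the Hugging Face `transformers` library. The sketch below is an assumed usage pattern, not taken from the model card: the model ID is from this page, but the Gemma single-turn chat markers (`<start_of_turn>` / `<end_of_turn>`) and the `generate` helper are generic conventions you should verify against the repository before relying on them.

```python
MODEL_ID = "heommi/gemma_2b_it_fintech"  # model ID from this card


def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using the standard Gemma chat markers
    (assumed here; check the repo's tokenizer chat template to confirm)."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Download the weights and run one generation (requires network + GPU/CPU RAM)."""
    # Lazy imports keep the prompt helper above dependency-free.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    inputs = tokenizer(build_gemma_prompt(user_message), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


# Example call (hypothetical Korean fintech prompt, "Explain the factors
# that affect Samsung Electronics' stock price"):
# print(generate("삼성전자 주가에 영향을 주는 요인을 설명해줘."))
```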
Good For
- Korean Fintech Applications: Ideal for developers building solutions that require understanding and generating text related to finance in Korean.
- Instruction-Following Tasks: As an instruction-tuned model, it is designed to follow specific prompts and generate relevant responses.
- Resource-Efficient Deployment: Its 2.5 billion parameter size offers a balance between performance and computational efficiency compared to larger models.
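To make the efficiency claim concrete: at the card's stated BF16 precision (2 bytes per parameter), 2.5 billion parameters imply roughly 5 GB of weight memory before activations and KV cache. A quick back-of-the-envelope estimate:

```python
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight footprint in GB: parameters x bytes per parameter.
    BF16 uses 2 bytes per parameter; activations and KV cache add on top."""
    return num_params * bytes_per_param / 1e9


print(weight_memory_gb(2.5e9))  # → 5.0 (GB, BF16 weights only)
```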