chohi/gemma-molit-finetuned
chohi/gemma-molit-finetuned is a 1 billion parameter instruction-tuned causal language model developed by chohi, fine-tuned from Google's Gemma-3-1B-it. This model is specifically optimized for Korean government policy Q&A within the Ministry of Land, Infrastructure and Transport (MOLIT) domain, covering areas such as housing, roads, transportation, and real estate policy. It excels at government civil complaint Q&A and RAG-based chatbots for policy documents, making it suitable for public-sector on-premise deployments.
Overview
gemma-molit-finetuned (also referred to as molit-gemma) is a specialized small large language model (sLLM) developed by chohi, fine-tuned from Google's Gemma-3-1B-it on Korean Ministry of Land, Infrastructure and Transport (MOLIT) domain data. It is designed for government policy Q&A, particularly in areas such as housing, roads, transportation, and real estate policy.
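Since the model is instruction-tuned from Gemma-3-1B-it, queries are expected to follow Gemma's chat-turn markup. The sketch below shows that format for a single user turn; it is a minimal illustration only (the `build_gemma_prompt` helper and the sample question are hypothetical), and in practice `tokenizer.apply_chat_template` from the `transformers` library should be used so the template always matches the model's tokenizer config.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat-turn markup.

    The <start_of_turn>/<end_of_turn> tokens delimit each speaker's turn;
    the prompt ends with an open model turn so generation continues there.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


# Hypothetical MOLIT-domain question for illustration.
prompt = build_gemma_prompt("공공임대주택 입주 자격 요건은 무엇인가요?")
print(prompt)
```

The resulting string would then be tokenized and passed to the model's `generate` call.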
Key Capabilities
- Domain-Specific Expertise: Highly specialized for Korean MOLIT government policy data.
- Instruction-Tuned: Optimized for question-answering tasks related to government policies.
- RAG Integration: Designed to work with Retrieval-Augmented Generation (RAG) systems to mitigate hallucination risks, using OpenSearch for policy document retrieval.
- Multilingual Support: Supports both Korean (ko) and English (en) languages.
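The RAG integration mentioned above follows the usual retrieve-then-generate pattern: fetch relevant policy passages, then ground the model's answer in them. The sketch below illustrates only the prompt-assembly step; the keyword-overlap `retrieve` function is a toy stand-in for a real OpenSearch query, and all document text is invented for illustration.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever standing in for an OpenSearch query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_rag_prompt(query: str, passages: list[str]) -> str:
    """Assemble a grounded prompt: retrieved excerpts first, then the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the policy excerpts below.\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


# Hypothetical policy snippets; in production these come from the
# OpenSearch index of MOLIT policy documents.
docs = [
    "Public rental housing eligibility depends on household income limits.",
    "The national highway expansion plan covers the 2024 to 2028 period.",
]
question = "Who is eligible for public rental housing?"
rag_prompt = build_rag_prompt(question, retrieve(question, docs))
print(rag_prompt)
```

Grounding the answer in retrieved excerpts this way is what mitigates the hallucination risk noted above, since the model is instructed to stay within the provided passages.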
Performance
The model achieves a BLEU score of 0.6258 and an LLM-as-a-Judge score of 4.34 out of 5.0, indicating strong performance within its specialized domain.
Good For
- Government Civil Complaint Q&A: Answering citizen inquiries regarding MOLIT policies.
- Policy Document RAG Chatbots: Building chatbots that provide information based on government policy documents.
- On-Premise Deployment: Small enough to run in secure public-sector environments without external API access.
Limitations
- Performance degrades on questions outside the MOLIT domain.
- Complex reasoning is limited by its 1 billion parameter size.
- It may not reflect policy changes after its training data cutoff; pairing it with RAG is recommended for up-to-date answers.