DimensionSTP/gemma-3-12b-it-Ko-Reasoning

Model size: 12B · Quantization: FP8 · Context length: 32k · Published: Mar 28, 2025 · License: Gemma · Architecture: Transformer

DimensionSTP/gemma-3-12b-it-Ko-Reasoning is a 12 billion parameter language model fine-tuned from Google's Gemma-3-12b-it and optimized for logical and multi-hop reasoning tasks in Korean. Starting from a base model with no dedicated reasoning training, it aims to bring strong reasoning capabilities to Korean, and it supports a context length of 128,000 tokens. It was developed by DimensionSTP using a large-scale Korean-English instruction dataset focused on diverse reasoning questions and symbolic logic. The model is well suited to applications that require complex reasoning and problem-solving in the Korean language.


Overview of DimensionSTP/gemma-3-12b-it-Ko-Reasoning

This model is a 12 billion parameter, instruction-tuned variant of Google's Gemma-3-12b-it, developed by DimensionSTP. It is specifically engineered to excel in logical and multi-hop reasoning tasks in Korean, addressing a critical need for specialized reasoning capabilities in the language. The fine-tuning process involved a comprehensive Korean-English instruction dataset, incorporating diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.

Key Capabilities & Differentiators

  • Korean Reasoning Specialization: Optimized for complex logical and multi-hop reasoning within the Korean language context.
  • Enhanced from Gemma-3-12b-it: Builds upon the robust foundation of Google's Gemma 3 architecture.
  • Extensive Context Window: Features a substantial context length of 128,000 tokens, beneficial for intricate reasoning problems.
  • Benchmark Performance: Demonstrates strong performance on various reasoning benchmarks, including GPQA diamond (61.3%), GSM8K (59.6%), HAERAE (73.9%), KSM (66.7%), LogicKor (8.56), and Math500 (77.8%), all measured using 0-shot Chain-of-Thought (CoT).
  • Open-Source Initiative: Part of the Ko-Reasoning Series, aiming to provide open-access models that can rival proprietary solutions in complex reasoning.

Ideal Use Cases

This model is particularly well-suited for applications requiring advanced reasoning and problem-solving in Korean, such as:

  • Complex Question Answering: Handling multi-step questions that require logical deduction.
  • Symbolic Logic Tasks: Solving problems based on symbolic reasoning.
  • Educational Tools: Assisting with Korean-language math and logic problems.
  • Research and Development: Exploring the boundaries of open-source reasoning models in Korean.
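For use cases like those above, the model can be queried through standard chat-style prompting. The sketch below hand-builds a single-turn prompt in the turn-marker format used by Gemma-family instruction-tuned models; in practice, a chat-templating utility such as Hugging Face's `tokenizer.apply_chat_template` produces this layout for you. The helper name and the sample question are illustrative assumptions, not taken from the model card.

```python
# Hypothetical sketch: formatting a Korean reasoning question as a
# single-turn chat prompt. Gemma-family instruction models wrap each turn
# in <start_of_turn>/<end_of_turn> markers; the helper name here is
# illustrative, not from the model card.

def build_gemma_prompt(question: str) -> str:
    """Format one user turn and open the model's turn for generation."""
    return (
        f"<start_of_turn>user\n{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: a multi-step Korean word problem of the kind the model targets.
prompt = build_gemma_prompt(
    "기차가 시속 60km로 2시간 30분 동안 달리면 총 몇 km를 이동하나요?"
)
# The resulting string would then be tokenized and passed to the model's
# generate() call (or produced directly via a chat template).
```

In a real deployment, this formatting step is usually delegated to the tokenizer's built-in chat template so that special tokens stay in sync with the model's training format.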