DimensionSTP/gemma-3-12b-it-Ko-Reasoning
Capabilities: Vision · Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Mar 28, 2025 · License: gemma · Architecture: Transformer

DimensionSTP/gemma-3-12b-it-Ko-Reasoning is a 12-billion-parameter language model fine-tuned from Google's Gemma-3-12b-it and optimized for logical and multi-hop reasoning tasks in Korean. Supporting a context length of 128,000 tokens, it was developed by DimensionSTP to add reasoning capabilities to a Korean language model that otherwise lacks them, using a large-scale Korean-English instruction dataset covering diverse reasoning questions and symbolic logic. The model is well suited to applications that require complex reasoning and problem-solving in Korean.
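As an instruction-tuned Gemma-3 derivative, the model expects prompts in Gemma's chat-turn format. The sketch below shows what a single-turn prompt looks like under that assumption; in practice you would let `tokenizer.apply_chat_template` from Hugging Face transformers build this string for you, since the fine-tune's exact template may differ.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Gemma's standard chat delimiters.

    This is an illustrative sketch; prefer tokenizer.apply_chat_template
    when actually serving DimensionSTP/gemma-3-12b-it-Ko-Reasoning.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"  # generation continues from here
    )

# Example: a Korean multi-hop reasoning question
prompt = build_gemma_prompt("서울에서 부산까지 KTX로 몇 시간 걸리나요?")
print(prompt)
```

The trailing `<start_of_turn>model\n` cues the model to begin its answer, which is where its Korean reasoning output is generated.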
