OLAResearch/OLAF2-14B
OLAResearch/OLAF2-14B is a 14.8 billion parameter Korean language model developed by OLAResearch for complex reasoning, mathematical problem-solving, and general language understanding. It features a specialized Reasoning Mode for STEM applications and detailed step-by-step reasoning; with Test-Time Scaling, its reported performance can surpass GPT-4o in certain scenarios. The model supports a context of up to 32K tokens, making it well suited to Retrieval-Augmented Generation (RAG) and other tasks requiring extensive context comprehension.
OLAFv2: Advanced Korean Language Model
OLAFv2, developed by OLAResearch, is a state-of-the-art Korean language model available in two sizes: a 14.8 billion parameter version for maximum performance and a 1.5 billion parameter version for lightweight applications. It is engineered to excel at complex reasoning, mathematical problem-solving, and general language understanding in Korean.
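Assuming the checkpoint is published on the Hugging Face Hub under the repository name shown above, querying the model would follow the standard transformers chat pattern. The sketch below is illustrative: the generation settings are placeholder values, not official recommendations from the model card.

```python
# Hedged sketch: standard Hugging Face `transformers` chat usage.
# The repo id "OLAResearch/OLAF2-14B" comes from this page; everything
# else (generation length, device placement) is an assumed default.

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat-message format expected by
    tokenizer.apply_chat_template."""
    return [{"role": "user", "content": question}]

def generate(question: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "OLAResearch/OLAF2-14B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Example Korean prompt ("How far is it from Seoul to Busan?").
messages = build_messages("서울에서 부산까지의 거리는 얼마나 되나요?")
```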
Key Capabilities
- Reasoning Mode: A standout feature designed for complex mathematical problems, STEM applications, and tasks requiring detailed, step-by-step reasoning. This mode can utilize Test-Time Scaling to enhance output detail and accuracy, with reported performance surpassing GPT-4o in certain scenarios.
- Long Context Support: The model supports up to 32K tokens, making it highly effective for Retrieval-Augmented Generation (RAG) and other applications demanding extensive context understanding and processing.
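In a RAG pipeline, the 32K-token window mainly determines how many retrieved passages can be packed into a single prompt. A minimal, model-agnostic budgeting sketch follows; the 4-characters-per-token estimate, the reserved-output margin, and the helper names are assumptions for illustration, not part of the model's tooling (a real pipeline would count tokens with the model's own tokenizer):

```python
# Hedged sketch: pack retrieved passages into a prompt without exceeding
# the model's 32K-token context window. Token counts are *estimated*
# (roughly 4 characters per token); use the real tokenizer in practice.

CONTEXT_WINDOW = 32_000      # OLAF2's reported maximum context length
RESERVED_FOR_OUTPUT = 2_000  # room left for the generated answer (assumed)

def estimate_tokens(text: str) -> int:
    """Crude length estimate; swap in the actual tokenizer for exact counts."""
    return max(1, len(text) // 4)

def pack_context(passages: list[str], question: str) -> str:
    """Greedily keep passages (assumed pre-sorted by relevance) that fit
    the remaining token budget, then append the question."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT - estimate_tokens(question)
    kept = []
    for passage in passages:
        cost = estimate_tokens(passage)
        if cost > budget:
            break  # next passage would overflow the window
        kept.append(passage)
        budget -= cost
    return "\n\n".join(kept + [question])

prompt = pack_context(["passage one " * 10, "passage two " * 10], "질문?")
```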
Benchmarks and Performance
OLAFv2 has been evaluated across several benchmarks, including KMMLU, HRM8K, and LogicKor, demonstrating strong performance. Further details on inference-time scaling and its impact on performance are available in the OLAResearch blog.
Good For
- Applications requiring advanced Korean language understanding.
- Complex problem-solving in mathematics and STEM fields.
- Retrieval-Augmented Generation (RAG) systems.
- Tasks benefiting from long-context processing and detailed reasoning.