tartuNLP/Llama-3.1-EstLLM-8B-Instruct-1125
Text generation · 8B parameters · FP8 quantization · 32k context length · Published: Nov 28, 2025 · License: llama3.1 · Architecture: Transformer

tartuNLP/Llama-3.1-EstLLM-8B-Instruct-1125 is an 8-billion-parameter instruction-following causal language model developed by the TartuNLP and TalTechNLP research groups. Built on Meta's Llama-3.1-8B, it underwent continued pre-training on 35B tokens, followed by supervised fine-tuning and direct preference optimization (DPO). The model is optimized for strong performance in both Estonian and English, excelling at instruction following and language competence tasks in both languages.
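As an instruction-tuned Llama-3.1 derivative, the model can be driven through the standard Hugging Face `transformers` chat-template workflow. The sketch below is illustrative, not an official usage snippet from this card: the model ID comes from the card, while the helper names (`build_messages`, `generate`) and the system prompt are hypothetical, and the exact chat template applied is whatever ships with the model's tokenizer.

```python
MODEL_ID = "tartuNLP/Llama-3.1-EstLLM-8B-Instruct-1125"


def build_messages(user_prompt: str,
                   system_prompt: str = "You are a helpful assistant.") -> list:
    """Assemble a chat in the messages format used by Llama-3.1 chat templates."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model (downloads ~8B weights on first call)."""
    # Heavy dependencies are imported lazily so the message helper stays lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Apply the model's own chat template and add the assistant-turn prompt.
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```

Since the model is bilingual, `user_prompt` may be given in either Estonian or English; a practical setup would also pass an Estonian system prompt when Estonian output is desired.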
