tartuNLP/Llama-3.1-EstLLM-8B-0525
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 20, 2025 · License: llama3.1 · Architecture: Transformer

tartuNLP/Llama-3.1-EstLLM-8B-0525 is an 8-billion-parameter causal language model developed by TartuNLP and TalTechNLP. It was continually pre-trained from meta-llama/Llama-3.1-8B on approximately 35 billion tokens, with a substantial share of Estonian text alongside Python code and mathematical data. As a base model optimized for Estonian language capability, it is intended for further fine-tuning on downstream tasks rather than for direct instruction-following. On Estonian benchmarks it outperforms its base model and several other 8B-class models across a range of Estonian tasks and translation metrics.
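Since this is a base (non-instruct) checkpoint, the natural way to use it directly is plain text completion. A minimal sketch with the Hugging Face `transformers` library (the library choice, dtype, and generation parameters are illustrative assumptions, not part of the model card):

```python
# Sketch: text completion with the base model via Hugging Face transformers.
# Assumptions: transformers + torch installed; dtype and max_new_tokens are
# illustrative choices, not values specified by the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tartuNLP/Llama-3.1-EstLLM-8B-0525"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# A base model continues text rather than following instructions,
# so prompt it with a passage to complete (here, an Estonian opener).
prompt = "Eesti keel on"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For instruction-following or chat behavior, the intended path is to fine-tune this checkpoint on a downstream task first.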
