LumiOpen/Llama-Poro-2-70B-Instruct

Parameters: 70B
Precision: FP8
Context length: 8192 tokens
Date: May 27, 2025
License: llama3.3
Overview

Poro 2 70B Instruct: Multilingual Conversational AI

Poro 2 70B Instruct is a 70.55 billion parameter instruction-following model, a collaborative effort by AMD Silo AI, the TurkuNLP group, and High Performance Language Technologies (HPLT). Built upon the Llama 3.1 70B architecture, it has been extensively fine-tuned for conversational AI and instruction adherence in both English and Finnish.

Key Capabilities & Training:

  • Bilingual Proficiency: Optimized for high-performance instruction following and conversations in both Finnish and English.
  • Advanced Training: Developed through continued pretraining on 165 billion tokens (Finnish, English, code, math), followed by Supervised Fine-Tuning (SFT) with 1.4 million instruction examples, and further refined with Direct Preference Optimization (DPO) using the HelpSteer3 dataset.
  • Performance: Demonstrates significant improvements in Finnish instruction following, outperforming Llama 3.1 70B Instruct and Llama 3.3 70B Instruct, while maintaining competitive English performance. Achieves a 66% win rate against Llama 3.3 70B Instruct on Finnish MTBench and a 57% win rate on English MTBench.
  • Context Length: Supports a maximum sequence length of 8192 tokens.
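As a rough usage sketch, the model can be loaded with Hugging Face transformers and queried through its chat template. The helper names and generation settings below are illustrative assumptions, not an official example from the card:

```python
def build_messages(user_prompt: str) -> list:
    """Wrap a single user turn in the chat-message format expected by apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn against the model (hypothetical settings)."""
    # Heavy imports are deferred so the lightweight helper above imports cheaply.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LumiOpen/Llama-Poro-2-70B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    # The model supports an 8192-token context, so the prompt plus the
    # generated tokens must stay within that budget.
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated portion, skipping special tokens.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)


# Example (downloads the full 70B checkpoint; needs multi-GPU hardware):
# print(generate("Mikä on Suomen pääkaupunki?"))  # Finnish: "What is the capital of Finland?"
```

Because the checkpoint is large, the heavyweight call is left commented out; both Finnish and English prompts go through the same chat template.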

Intended Use Cases:

  • High-performance conversational AI in Finnish and English.
  • Question answering, information retrieval, and content generation.
  • Educational and customer service applications.
  • Translation between Finnish and English.
  • Research and enterprise applications requiring strong multilingual capabilities.
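For server-style deployments such as the enterprise and translation use cases above, the model can sit behind an OpenAI-compatible endpoint (e.g. one exposed by a serving framework like vLLM). A minimal client sketch, assuming a hypothetical server already running at localhost; the endpoint, sampling settings, and helper names are illustrative:

```python
import json
import urllib.request


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload (illustrative settings)."""
    return {
        "model": "LumiOpen/Llama-Poro-2-70B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,  # assumed sampling temperature, not from the card
    }


def translate_to_finnish(text: str, base_url: str = "http://localhost:8000/v1") -> str:
    """Ask the model to translate English text to Finnish via a hypothetical local server."""
    payload = build_chat_request(
        f"Translate the following English text to Finnish:\n\n{text}"
    )
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]


# Example (requires a running OpenAI-compatible server):
# print(translate_to_finnish("Hello, how are you?"))
```

The same payload builder works for question answering or customer-service prompts; only the instruction text changes.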

Limitations:

  • Limited proficiency in languages other than English and Finnish.
  • Potential for generating biased or factually incorrect content.