LumiOpen/Llama-Poro-2-8B-SFT
Text generation · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Jun 13, 2025 · License: llama3.3 · Architecture: Transformer · Concurrency cost: 1

LumiOpen's Llama-Poro-2-8B-SFT is an 8 billion parameter supervised fine-tuned (SFT) model based on Llama 3.1 8B, designed for instruction following and conversational AI in both Finnish and English. Developed by a collaboration including AMD Silo AI and TurkuNLP, it serves as an intermediate checkpoint in the Poro 2 model family, preceding Direct Preference Optimization (DPO). This model demonstrates significant improvements in Finnish instruction-following capabilities compared to Llama 3.1 8B Instruct, while maintaining strong English performance, making it ideal for research into post-training techniques.
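The card does not show how a conversation is presented to the model. Since the checkpoint is fine-tuned from Llama 3.1 8B, a reasonable assumption is that it expects the Llama 3.1 chat prompt format; the sketch below builds such a prompt by hand (the special-token layout is an assumption based on the Llama 3.1 base model, not stated on this card):

```python
def build_prompt(messages):
    """Render a list of {role, content} dicts into a Llama 3.1-style prompt string.

    Assumed format: <|begin_of_text|>, then one
    <|start_header_id|>ROLE<|end_header_id|>\n\nCONTENT<|eot_id|>
    block per turn, ending with an open assistant header.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You answer in Finnish."},
    {"role": "user", "content": "Mikä on Suomen pääkaupunki?"},
])
print(prompt)
```

In practice, loading the tokenizer with Hugging Face transformers and calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` would produce the canonical prompt without hand-rolling the template.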
