mpasila/Poro-2-Conversational-Tuumailu-V1-8B

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Sep 22, 2025 · License: llama3.1 · Architecture: Transformer

Poro-2-Conversational-Tuumailu-V1-8B is an 8-billion-parameter Llama 3.1 model developed by mpasila, finetuned from LumiOpen/Llama-Poro-2-8B-base. It was trained with Unsloth and Hugging Face's TRL library, a combination reported to deliver roughly 2x faster finetuning. The model targets conversational applications, leveraging its Llama 3.1 architecture for dialogue.


Overview

mpasila/Poro-2-Conversational-Tuumailu-V1-8B is an 8-billion-parameter language model based on the Llama 3.1 architecture, finetuned from the LumiOpen/Llama-Poro-2-8B-base model. Developed by mpasila, it was trained with a focus on conversational applications.

Key Finetuning Details

  • Base Model: LumiOpen/Llama-Poro-2-8B-base
  • Architecture: Llama 3.1
  • Training Efficiency: The finetuning used Unsloth together with Hugging Face's TRL library, a setup Unsloth reports as roughly 2x faster than conventional TRL training.
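The training setup above can be sketched as follows. This is a hypothetical reconstruction using Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`; the actual dataset, LoRA configuration, and hyperparameters used by the author are not documented in the card, so every value below (and the dataset name) is an illustrative assumption.

```python
# Hypothetical sketch of an Unsloth + TRL finetune from the stated base model.
# The dataset name and all hyperparameters are assumptions, not the author's.

BASE_MODEL = "LumiOpen/Llama-Poro-2-8B-base"  # base model stated in the card
MAX_SEQ_LENGTH = 8192  # matches the published 8k context length


def finetune():
    # Heavy imports live inside the function so the sketch can be read
    # (and the constants above reused) without unsloth/trl installed.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=MAX_SEQ_LENGTH,
        load_in_4bit=True,  # assumption: QLoRA-style memory savings
    )
    # Attach LoRA adapters; rank is an assumed value.
    model = FastLanguageModel.get_peft_model(model, r=16)

    # Placeholder dataset of conversational examples.
    dataset = load_dataset("some/conversational-dataset", split="train")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            learning_rate=2e-4,
            max_steps=500,
            output_dir="outputs",
        ),
    )
    trainer.train()
```

Calling `finetune()` on a GPU machine with `unsloth`, `trl`, and `datasets` installed would run the training loop; the speedup the card cites comes from Unsloth's fused kernels, not from TRL itself.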

Good For

  • Conversational AI: Its finetuning from a conversational base model suggests suitability for dialogue systems, chatbots, and interactive applications.
  • Efficient Deployment: The FP8 quantization and 8B parameter count keep memory requirements modest, making the model a reasonable fit for resource-constrained serving (note that Unsloth primarily accelerates training rather than inference).
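As a Llama 3.1 derivative, the model should follow the standard Llama 3.1 instruct prompt format, assuming the finetune kept the base chat template (the card does not confirm this). A minimal formatter sketching that template is shown below; in practice you would call `tokenizer.apply_chat_template` from the model's own tokenizer instead.

```python
# Build a Llama 3.1-style chat prompt from a list of role/content messages.
# Assumption: this finetune uses the stock Llama 3.1 chat template;
# prefer tokenizer.apply_chat_template with the real tokenizer in production.

def format_llama31_chat(messages: list[dict[str, str]]) -> str:
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


messages = [
    {"role": "system", "content": "You are a helpful conversational assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = format_llama31_chat(messages)
```

The resulting string can be passed to any backend serving the model (e.g. a `text-generation` pipeline); stopping on `<|eot_id|>` ends the assistant turn.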