PatrickMooni/Llama-3.1-8B-Dedosgruesos-v1
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Apr 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

PatrickMooni/Llama-3.1-8B-Dedosgruesos-v1 is an 8-billion-parameter Llama-3.1 model developed by PatrickMooni and fine-tuned from unsloth/llama-3.1-8b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, with a reported 2x training speedup. With an 8192-token context length, it targets efficient deployment and inference in applications that need a capable Llama-3.1 base.


Overview

PatrickMooni/Llama-3.1-8B-Dedosgruesos-v1 is an 8-billion-parameter language model developed by PatrickMooni and fine-tuned from the unsloth/llama-3.1-8b-bnb-4bit base, a 4-bit (bitsandbytes) quantization of Llama-3.1 8B. The model uses the Llama-3.1 architecture and was trained with an emphasis on efficiency, combining the Unsloth library with Hugging Face's TRL library; this setup is reported to deliver roughly 2x faster fine-tuning.

Key Capabilities

  • Llama-3.1 Architecture: Benefits from the advancements and performance characteristics of the Llama-3.1 series.
  • Efficient Training: Developed with Unsloth for accelerated fine-tuning, which can shorten iteration cycles.
  • 8 Billion Parameters: Offers a balance of capability and computational efficiency for various NLP tasks.
  • 8192-token Context Length: Supports processing and generating longer sequences of text.
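As a Llama-3.1 fine-tune, the model is assumed to inherit the standard Llama 3.1 chat format from its base. A minimal sketch of that format is below; in practice, `tokenizer.apply_chat_template` builds this string for you, so this is illustrative only:

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate the reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The trailing `assistant` header with no content is what signals the model to begin its completion.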

Good For

  • Applications requiring a Llama-3.1 model with an emphasis on efficient training and deployment.
  • Developers looking for a capable 8B parameter model for general language understanding and generation tasks.
  • Use cases where the benefits of Unsloth's accelerated training process are advantageous.
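A hedged sketch of loading the model with the Hugging Face `transformers` library follows. The repository id is the model discussed above; everything else (sampling settings, the 512-token generation reserve) is an illustrative assumption, and the `generation_kwargs` helper shows one way to keep prompt plus completion within the 8192-token context:

```python
MODEL_ID = "PatrickMooni/Llama-3.1-8B-Dedosgruesos-v1"
MAX_CTX = 8192  # context length from the model card


def generation_kwargs(prompt_tokens: int, reserve: int = 512) -> dict:
    """Budget max_new_tokens so prompt + completion fit in the 8k context.

    `reserve` is an illustrative default, not a value from the model card.
    """
    return {
        "max_new_tokens": min(reserve, MAX_CTX - prompt_tokens),
        "do_sample": True,
        "temperature": 0.7,
    }


def main():
    # Imports kept local: calling main() downloads ~16 GB of weights,
    # so invoke it explicitly only when you are ready to do so.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = [{"role": "user", "content": "Say hello in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, **generation_kwargs(inputs.shape[-1]))
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the base was quantized with bitsandbytes, a `load_in_4bit=True` quantization config (or Unsloth's `FastLanguageModel`) may be a better fit on constrained hardware; check the repository for recommended settings.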