JoaoReiz/Llama3.2_3B_CachacaNER

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 3.2B
  • Quantization: BF16
  • Context length: 32k
  • Published: Apr 6, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

JoaoReiz/Llama3.2_3B_CachacaNER is a 3.2-billion-parameter Llama-based instruction-tuned model developed by JoaoReiz. It was fine-tuned with Unsloth and Hugging Face's TRL library, which made training roughly 2x faster than standard methods. The model targets general instruction-following tasks and supports a context length of 32,768 tokens.

Model Overview

JoaoReiz/Llama3.2_3B_CachacaNER was fine-tuned by JoaoReiz from unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit, Unsloth's 4-bit bitsandbytes build of Llama 3.2 3B Instruct.
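
The model can be loaded with the standard transformers API. The snippet below is a minimal inference sketch, not an official example from the repository: the prompt and generation settings are illustrative assumptions, and loading in bfloat16 simply matches the BF16 quantization listed above.

```python
# Minimal inference sketch using the Hugging Face transformers library.
# The repository id comes from this card; the prompt and generation
# settings below are illustrative assumptions, not documented defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JoaoReiz/Llama3.2_3B_CachacaNER"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

# Llama 3.2 instruct models expect the chat template, so build the
# prompt from messages rather than raw text.
messages = [{"role": "user", "content": "Briefly explain what instruction tuning is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```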

Key Characteristics

  • Efficient Training: The model was trained with Unsloth and Hugging Face's TRL library, which enabled a 2x faster fine-tuning process compared to standard methods (a sketch of this recipe follows this list).
  • Llama 3.2 Architecture: Built upon the Llama 3.2 base, it inherits the foundational capabilities of this architecture.
  • Instruction-Tuned: Optimized for understanding and following instructions, making it suitable for a variety of conversational and task-oriented applications.
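
For reference, the snippet below sketches what an Unsloth + TRL fine-tuning run of this kind typically looks like. It is not the author's published training script: the dataset file, LoRA settings, and hyperparameters are placeholder assumptions, and SFTTrainer keyword names vary slightly across TRL versions.

```python
# Hedged sketch of an Unsloth + TRL fine-tuning run of the kind this
# card describes. NOT the author's actual training script: the dataset
# file, LoRA settings, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the 4-bit base model named on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,  # the advertised context length; reduce to fit GPU memory
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical instruction dataset with a "text" column of formatted examples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Loading the base in 4-bit and training only LoRA adapters is where Unsloth's speed and memory savings come from; the adapters can later be merged back into the weights for deployment.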

Use Cases

This model suits applications that need a compact yet capable instruction-following language model, particularly where training efficiency matters. Its Unsloth-optimized fine-tuning workflow makes it an attractive option for developers who want to iterate on Llama-based models with reduced computational overhead during development.