JoaoReiz/Llama3.2_3B_UlyssesNER-BR

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

JoaoReiz/Llama3.2_3B_UlyssesNER-BR is a 3.2 billion parameter, instruction-tuned Llama 3.2 model developed by JoaoReiz. It was fine-tuned using Unsloth and Hugging Face's TRL library for accelerated training, a methodology geared toward efficient, task-focused fine-tuning.


Model Overview

JoaoReiz/Llama3.2_3B_UlyssesNER-BR is a 3.2 billion parameter language model fine-tuned from the unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit base checkpoint. Developed by JoaoReiz, it uses the Llama 3.2 architecture, and starting from a 4-bit quantized base made the fine-tuning memory-efficient.
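Assuming the model keeps the standard Llama 3.2 instruct chat format, a minimal inference sketch with the `transformers` library might look like the following. The prompt wording and generation settings are illustrative assumptions, not taken from the model card:

```python
MODEL_ID = "JoaoReiz/Llama3.2_3B_UlyssesNER-BR"

def build_messages(text: str) -> list[dict]:
    # Llama 3.2 instruct checkpoints use the chat-message format;
    # the exact prompt the model was tuned on is an assumption here.
    return [{"role": "user", "content": f"Identify the named entities in: {text}"}]

def generate(text: str, max_new_tokens: int = 128):
    # Heavy dependency imported lazily; requires `pip install transformers torch`
    # and downloads the ~3B-parameter weights on first use.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID, torch_dtype="bfloat16")
    out = generator(build_messages(text), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

Passing a messages list (rather than a raw string) lets the pipeline apply the model's own chat template, which matters for instruction-tuned checkpoints.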

Key Characteristics

  • Architecture: Llama 3.2, a causal language model.
  • Parameter Count: 3.2 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, a combination Unsloth reports can train up to 2x faster than standard methods.
  • License: Released under the Apache-2.0 license, allowing for broad use and distribution.
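The Unsloth + TRL recipe described above typically pairs Unsloth's `FastLanguageModel` with TRL's `SFTTrainer`. The sketch below illustrates that pattern under stated assumptions: the hyperparameters are illustrative defaults, not the author's actual settings, and `finetune` is a hypothetical helper:

```python
# Base checkpoint named in the model card; already bnb 4-bit quantized.
BASE_MODEL = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"

# Illustrative assumptions, not the author's recorded hyperparameters.
TRAIN_CONFIG = {
    "max_seq_length": 2048,   # assumed training context window
    "lora_r": 16,             # assumed LoRA rank
    "learning_rate": 2e-4,    # common choice for LoRA fine-tuning
}

def finetune(train_dataset):
    """Sketch of a LoRA fine-tune; requires a CUDA GPU plus unsloth, trl, transformers."""
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=BASE_MODEL,
        max_seq_length=TRAIN_CONFIG["max_seq_length"],
        load_in_4bit=True,
    )
    # Attach LoRA adapters so only a small fraction of weights is trained.
    model = FastLanguageModel.get_peft_model(model, r=TRAIN_CONFIG["lora_r"])

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
        args=TrainingArguments(
            output_dir="outputs",
            learning_rate=TRAIN_CONFIG["learning_rate"],
            per_device_train_batch_size=2,
            num_train_epochs=1,
        ),
    )
    trainer.train()
    return model
```

Fine-tuning a 4-bit quantized base with LoRA adapters is what makes a 3B-parameter fine-tune feasible on a single consumer GPU.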

Potential Use Cases

This model is suitable for applications requiring a compact yet capable instruction-tuned language model, especially where training speed and resource efficiency matter. The 'UlyssesNER-BR' in its name suggests fine-tuning for named entity recognition in Brazilian Portuguese, plausibly on the UlyssesNER-Br corpus of legislative texts, though the README does not specify the exact fine-tuning task or data.