Alelcv27/Llama3.2-3B-Base-Code-v2

Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer

Alelcv27/Llama3.2-3B-Base-Code-v2 is a 3.2 billion parameter Llama-based model developed by Alelcv27. It was finetuned with Unsloth and Hugging Face's TRL library, resulting in 2x faster training, and is designed for general language tasks that suit the Llama architecture.


Model Overview

Alelcv27/Llama3.2-3B-Base-Code-v2 was finetuned from unsloth/llama-3.2-3b-unsloth-bnb-4bit using the Unsloth library together with Hugging Face's TRL library. The distinguishing feature of its development is the optimized training process: Unsloth's kernels made the finetuning roughly 2x faster.
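The sketch below shows what that finetuning setup typically looks like, following Unsloth's documented pattern. The dataset file, text column, LoRA rank, and training arguments are illustrative assumptions, not details taken from this card, and some keyword names vary across TRL versions.

```python
# Hedged sketch of the Unsloth + TRL finetuning workflow described above.
# Dataset path, column name, and hyperparameters are illustrative only.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base checkpoint this model was finetuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-unsloth-bnb-4bit",
    max_seq_length=32768,  # matches the model's 32k context length
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumed values,
# not taken from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical training corpus with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # older TRL API; newer releases move this into SFTConfig
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```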

Key Characteristics

  • Architecture: Llama-based, 3.2 billion parameters.
  • Training Efficiency: Utilizes Unsloth for 2x faster finetuning.
  • Context Length: Supports a context length of 32768 tokens (see the loading sketch after this list).
  • License: Released under the Apache-2.0 license.
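Assuming the weights load with the standard Transformers API (a reasonable assumption for a Llama-based checkpoint, though not confirmed by the card), a minimal inference sketch looks like this. Because this is a base model, prompt it with text to continue rather than a chat template; the prompt and generation settings are illustrative.

```python
# Hedged inference sketch using the Transformers library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Llama3.2-3B-Base-Code-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision in the metadata
    device_map="auto",           # drop this on CPU-only machines
)

# Completion-style prompt, since this is a base (non-chat) model.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```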

Good For

  • Developers seeking a Llama-based model with efficient training origins.
  • Applications requiring a 3.2 billion parameter model with a long, 32k-token context window.
  • General language understanding and generation tasks where the Llama architecture is suitable.