gjyotin305/Llama-3.2-3B-Instruct_unsloth_w_new_merged

Hugging Face

  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 3.2B parameters
  • Quantization: BF16
  • Context length: 32k
  • Published: Dec 26, 2025
  • License: apache-2.0
  • Architecture: Transformer (open weights)

gjyotin305/Llama-3.2-3B-Instruct_unsloth_w_new_merged is a 3.2 billion parameter Llama-3.2-Instruct model developed by gjyotin305. It was finetuned using Unsloth together with Hugging Face's TRL library, which enabled roughly 2x faster training. The model targets instruction-following tasks, with the optimized training process aimed at efficient performance.


Model Overview

gjyotin305/Llama-3.2-3B-Instruct_unsloth_w_new_merged is a 3.2 billion parameter instruction-tuned language model. Developed by gjyotin305, it is based on the Llama-3.2-Instruct architecture and was finetuned with Unsloth and the TRL library.

Key Characteristics

  • Architecture: Llama-3.2-Instruct base model.
  • Parameter Count: 3.2 billion parameters.
  • Training Optimization: Finetuned with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training compared to standard methods.
  • License: Distributed under the Apache-2.0 license.
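The parameter count and BF16 quantization above translate directly into a memory budget: BF16 stores each parameter in 2 bytes, so the raw weights of a 3.2B-parameter model occupy roughly 6.4 GB. A back-of-envelope sketch (the function name is illustrative, not part of any library):

```python
def weight_footprint_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate size of the raw model weights in gigabytes (10^9 bytes)."""
    return num_params * bytes_per_param / 1e9

# BF16 = 2 bytes per parameter, so a 3.2B-parameter model needs
# about 6.4 GB for the weights alone. Activations, the KV cache,
# and framework overhead come on top of this at inference time.
print(weight_footprint_gb(3.2e9, 2))   # ~6.4
```

This is why the model fits comfortably on a single consumer GPU in BF16, and would shrink further under 8-bit or 4-bit quantization.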

Intended Use Cases

This model is primarily suited for applications that require efficient instruction following. Its relatively small parameter count makes it a good fit for deployments where computational resources are limited, and the Unsloth-based finetuning points to a development focus on training speed and efficiency.
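For instruction-following use, prompts must follow the model's chat template. Assuming this finetune keeps the standard Llama 3 instruct format (a reasonable assumption for a merged Llama-3.2-Instruct derivative, but worth verifying against the repo's tokenizer config), a single-turn prompt looks like this; the helper below is a hand-rolled sketch for illustration:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct chat format.

    Each role block is delimited by <|start_header_id|>/<|end_header_id|>
    and terminated with <|eot_id|>; the prompt ends with an open
    assistant header so the model continues from there.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful assistant.",
    "Summarize BF16 precision in one sentence.",
)
```

In practice you would not hand-build these strings: loading the repo's tokenizer and calling its `apply_chat_template` method applies whatever template the finetune actually ships.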