mhmsadegh/Llama-3.2-3B-Instruct-3-sfand-cause-effect-model-lora

Hugging Face
Text Generation · Open Weights

  • Model Size: 3.2B
  • Quantization: BF16
  • Context Length: 32k tokens
  • Concurrency Cost: 1
  • Published: Feb 22, 2026
  • License: apache-2.0
  • Architecture: Transformer

The mhmsadegh/Llama-3.2-3B-Instruct-3-sfand-cause-effect-model-lora is a 3.2-billion-parameter instruction-tuned causal language model developed by mhmsadegh. Fine-tuned from unsloth/Llama-3.2-3B-Instruct, it was trained with Unsloth and Hugging Face's TRL library for accelerated performance. It supports a 32768-token context length and is designed for general instruction-following tasks.


Model Overview

This model is fine-tuned from the unsloth/Llama-3.2-3B-Instruct base model using the Unsloth library and Hugging Face's TRL for efficient training. As the repository name indicates, the fine-tune was performed with LoRA (low-rank adaptation) and targets a cause-and-effect task.

Key Characteristics

  • Base Model: unsloth/Llama-3.2-3B-Instruct (Llama 3.2 transformer architecture).
  • Parameter Count: 3.2 billion parameters.
  • Context Length: Supports a context window of 32768 tokens.
  • Training Efficiency: Trained with Unsloth, which advertises roughly 2x faster training, indicating an optimized fine-tuning process.
  • License: Distributed under the Apache-2.0 license.
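Given these characteristics, inference should work through the standard Hugging Face `transformers` API. The sketch below is a minimal example under that assumption; it presumes the repository resolves to loadable weights (merged, or a PEFT adapter that `transformers` can handle) and that the model uses the usual Llama 3.2 chat template. The `build_messages` helper and the sample prompt are illustrative, not from the model card.

```python
MODEL_ID = "mhmsadegh/Llama-3.2-3B-Instruct-3-sfand-cause-effect-model-lora"


def build_messages(instruction: str) -> list[dict]:
    """Wrap a user instruction in the chat-message format that
    Llama-3.2 instruction models expect."""
    return [{"role": "user", "content": instruction}]


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Sketch: load the model and generate a reply for one instruction."""
    # Heavy imports are deferred so the helpers above stay importable
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
        device_map="auto",
    )
    # Apply the chat template and tokenize in one step.
    input_ids = tokenizer.apply_chat_template(
        build_messages(instruction),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("What causes tides on Earth?"))
```

Because the model weighs in at roughly 3.2B parameters in BF16, expect it to need on the order of 7 GB of GPU memory for inference in this configuration.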

Potential Use Cases

This model is suitable for a variety of instruction-following applications, particularly where a compact yet capable model with a substantial context window is beneficial. Its efficient training suggests it could be a good candidate for further domain-specific finetuning or deployment in resource-constrained environments.
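For the further domain-specific fine-tuning mentioned above, the Unsloth + TRL toolchain the card names can be reused. The sketch below is a hedged outline under that assumption; the dataset argument, LoRA rank, and training hyperparameters are illustrative placeholders, not values from the model card.

```python
def continue_finetuning(train_dataset):
    """Sketch: further fine-tune this model with Unsloth + TRL's SFTTrainer.
    All hyperparameters below are illustrative, not from the model card."""
    # Deferred imports: unsloth and trl are only needed when training runs.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="mhmsadegh/Llama-3.2-3B-Instruct-3-sfand-cause-effect-model-lora",
        max_seq_length=32768,  # matches the advertised context window
        load_in_4bit=True,     # 4-bit loading for resource-constrained setups
    )
    # Attach a fresh LoRA adapter; rank 16 is a common illustrative default.
    model = FastLanguageModel.get_peft_model(model, r=16)

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            max_steps=100,  # placeholder; tune for the target domain
        ),
    )
    trainer.train()
    return model
```

The 4-bit loading path is what makes this practical on a single consumer GPU, which lines up with the resource-constrained deployment scenario described above.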