j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-AUX_CT_CE

Hosted on Hugging Face · Text generation · 3.2B parameters · BF16 · 32k context · Transformer architecture · Published Mar 27, 2026

j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-AUX_CT_CE is a 3.2-billion-parameter instruction-tuned causal language model, fine-tuned from Meta's Llama-3.2-3B-Instruct. It was trained with the TRL library using Supervised Fine-Tuning (SFT) and is intended for general instruction-following and conversational tasks.
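Because the model descends from Llama-3.2-3B-Instruct, prompts follow the Llama 3 chat template. The sketch below lays out that header-token format by hand purely for illustration; in practice `tokenizer.apply_chat_template` should build the prompt for you.

```python
# Illustrative sketch of the Llama-3.x chat prompt layout this model inherits
# from its Llama-3.2-3B-Instruct base. Use tokenizer.apply_chat_template in
# real code rather than hand-assembling the string.
def format_llama3_prompt(system: str, user: str) -> str:
    """Lay out a single-turn chat in the Llama 3 special-token format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("You are a helpful assistant.", "What is SFT?")
```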

Model Overview

This model, j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-AUX_CT_CE, builds on the meta-llama/Llama-3.2-3B-Instruct base model and inherits the Llama-3.2 architecture.

Key Characteristics

  • Base Model: Fine-tuned from meta-llama/Llama-3.2-3B-Instruct.
  • Training Method: Supervised Fine-Tuning (SFT) with the TRL library.
  • Parameter Count: 3.2 billion parameters; at BF16 precision the weights occupy roughly 6.4 GB, small enough for single-GPU inference.
  • Context Length: Supports a context window of 32,768 tokens.
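Since the checkpoint is published in the standard Hugging Face format, it should load with the `transformers` pipeline API. The sketch below is a minimal example, not an official recipe: the sampling settings are illustrative assumptions, and the heavy model download is deferred behind a `__main__` guard.

```python
from typing import Dict, List

# Model id as listed on the card.
MODEL_ID = "j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-AUX_CT_CE"

def build_messages(user_prompt: str,
                   system_prompt: str = "You are a helpful assistant.") -> List[Dict[str, str]]:
    """Assemble a single-turn chat in the messages format chat pipelines expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # Deferred import: the first run downloads roughly 6.4 GB of BF16 weights.
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID,
                    torch_dtype="bfloat16", device_map="auto")
    out = pipe(build_messages("Summarize supervised fine-tuning in one sentence."),
               max_new_tokens=128)
    print(out[0]["generated_text"][-1]["content"])
```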

Intended Use Cases

This model is suitable for various instruction-following applications, including:

  • Generating responses to user prompts.
  • Engaging in conversational AI tasks.
  • Serving as a foundation for further domain-specific fine-tuning due to its instruction-tuned nature.
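The card names TRL's SFT as the training method, so further domain-specific fine-tuning could follow the same route. The sketch below is a hedged example using TRL's `SFTTrainer`: the toy dataset, output directory, and hyperparameter defaults are all placeholder assumptions, and the training step is deferred behind a `__main__` guard.

```python
def to_chat_record(prompt: str, response: str) -> dict:
    """Wrap a prompt/response pair in the conversational format SFTTrainer accepts."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }

if __name__ == "__main__":
    # Deferred imports: trl and datasets are only needed for actual training.
    from datasets import Dataset
    from trl import SFTConfig, SFTTrainer

    # Toy two-example dataset; substitute your own domain data here.
    train = Dataset.from_list([
        to_chat_record("What is SFT?", "Supervised fine-tuning on labeled examples."),
        to_chat_record("Name the base model.", "meta-llama/Llama-3.2-3B-Instruct."),
    ])

    trainer = SFTTrainer(
        model="j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-AUX_CT_CE",
        train_dataset=train,
        args=SFTConfig(output_dir="./sft-further"),  # placeholder output path
    )
    trainer.train()
```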