j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-SEED1001

Text Generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 1, 2026 · Architecture: Transformer

j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-SEED1001 is a 3.2-billion-parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-3B-Instruct via supervised fine-tuning (SFT) with the TRL framework. It is intended for general text generation tasks that benefit from instruction following.


Model Overview

This model, j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-SEED1001, is a fine-tuned variant of the meta-llama/Llama-3.2-3B-Instruct base model. It features 3.2 billion parameters and is designed for instruction-following tasks, making it suitable for various text generation applications.

Training Details

The model was trained using Supervised Fine-Tuning (SFT) with the TRL library. The training process utilized specific versions of key frameworks:

  • TRL: 0.27.1
  • Transformers: 4.57.6
  • PyTorch: 2.10.0+cu128
  • Datasets: 4.8.4
  • Tokenizers: 0.22.2
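For orientation, an SFT run with TRL typically looks like the sketch below. This is a minimal illustration of the TRL `SFTTrainer` API, not the actual training configuration: the dataset, output directory, and all hyperparameters here are placeholders, and the real training data and settings for this model are not published in this card.

```python
def train():
    """Minimal SFT sketch with TRL. Dataset and output_dir are placeholders,
    not the configuration actually used to train this model."""
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Placeholder instruction dataset; the model's real training data is not disclosed.
    dataset = load_dataset("trl-lib/Capybara", split="train")

    trainer = SFTTrainer(
        model="meta-llama/Llama-3.2-3B-Instruct",  # the stated base model
        train_dataset=dataset,
        args=SFTConfig(output_dir="Llama-3.2-3B-Instruct-SFT"),
    )
    trainer.train()
```

The heavy imports are kept inside the function so the module can be inspected without TRL installed; in a real run you would also set batch size, learning rate, and sequence length in `SFTConfig`.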

Key Capabilities

  • Instruction Following: Optimized to respond to user prompts and instructions effectively.
  • Text Generation: Capable of generating coherent and contextually relevant text based on input.

Usage

This model can be integrated into Python applications with the transformers library, using the standard text-generation pipeline for chat-style prompting.
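A minimal loading sketch with the transformers text-generation pipeline is shown below. The generation parameters (`max_new_tokens`, BF16 dtype, `device_map="auto"`) are illustrative defaults chosen here, not settings recommended by the model author.

```python
# Minimal sketch: chat-style generation with the transformers pipeline.
model_id = "j05hr3d/Llama-3.2-3B-Instruct-C_M_T-SAM_RHO0_02-SEED1001"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn. Imports are lazy so the module loads even
    where transformers/torch are not installed."""
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model=model_id,
        torch_dtype=torch.bfloat16,  # card lists the model as BF16
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    out = pipe(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns the full message list; take the assistant reply.
    return out[0]["generated_text"][-1]["content"]

# Example call (downloads the model weights on first use):
# print(generate("Summarize supervised fine-tuning in one paragraph."))
```

Chat-formatted input (a list of role/content messages) lets the pipeline apply the model's own chat template, which is the expected way to prompt Llama 3.2 Instruct variants.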