j05hr3d/Llama-3.2-3B-Instruct-C_M_T_CT_CE_CM-2EP-SEED1001

Text generation · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Apr 1, 2026 · Architecture: Transformer

j05hr3d/Llama-3.2-3B-Instruct-C_M_T_CT_CE_CM-2EP-SEED1001 is a 3.2 billion parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-3B-Instruct using supervised fine-tuning (SFT) with the TRL framework. It is designed for general instruction-following tasks, leveraging its Llama 3.2 base for conversational applications.


Model Overview

j05hr3d/Llama-3.2-3B-Instruct-C_M_T_CT_CE_CM-2EP-SEED1001 is an instruction-tuned language model fine-tuned from meta-llama/Llama-3.2-3B-Instruct. This 3.2 billion parameter model was trained with Supervised Fine-Tuning (SFT) using the TRL (Transformer Reinforcement Learning) framework.

Key Capabilities

  • Instruction Following: Designed to respond to user prompts and follow instructions effectively.
  • Conversational AI: Suitable for generating human-like text in response to a variety of questions and scenarios.
  • Base Model Heritage: Benefits from the robust capabilities of the Llama 3.2 series.
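As an instruction-tuned Llama 3.2 variant, the model is typically driven through the standard `transformers` text-generation pipeline with a chat-style message list. The sketch below is illustrative rather than taken from the model card: the `build_chat` helper and the prompts are placeholders, and the generation call (left commented out) assumes a machine with enough memory for the BF16 weights.

```python
def build_chat(system_prompt, user_prompt):
    """Assemble the message list consumed by the Llama 3.2 chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate(messages, max_new_tokens=256):
    """Run the fine-tuned model; requires `transformers` and the BF16 weights."""
    from transformers import pipeline  # lazy import: build_chat works without it

    pipe = pipeline(
        "text-generation",
        model="j05hr3d/Llama-3.2-3B-Instruct-C_M_T_CT_CE_CM-2EP-SEED1001",
        torch_dtype="bfloat16",
        device_map="auto",
    )
    return pipe(messages, max_new_tokens=max_new_tokens)[0]["generated_text"]


messages = build_chat(
    "You are a concise assistant.",
    "Explain supervised fine-tuning in two sentences.",
)
# generate(messages)  # uncomment to download the weights and run inference
```

Passing a message list (rather than a raw string) lets the pipeline apply the model's built-in chat template, so the special tokens of the Llama 3.2 prompt format are inserted automatically.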

Training Details

The model was trained with SFT using the following framework versions:

  • TRL: 0.27.1
  • Transformers: 4.57.6
  • PyTorch: 2.10.0+cu128
  • Datasets: 4.8.4
  • Tokenizers: 0.22.2
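With these libraries, a minimal SFT run can be sketched as follows. This is an assumption-laden illustration, not the author's actual training script: the dataset is a placeholder, and the epoch count and seed are merely read off the `2EP-SEED1001` suffix of the model name.

```python
# Hypothetical hyperparameters inferred from the model name suffix (2EP, SEED1001).
HPARAMS = {
    "num_train_epochs": 2,
    "seed": 1001,
    "output_dir": "llama32-3b-sft",
}


def train(dataset_name="trl-lib/Capybara"):  # placeholder dataset, not from the card
    """Sketch of an SFT run with TRL's SFTTrainer (requires trl, transformers, datasets)."""
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    trainer = SFTTrainer(
        model="meta-llama/Llama-3.2-3B-Instruct",  # base model named in the card
        train_dataset=load_dataset(dataset_name, split="train"),
        args=SFTConfig(**HPARAMS),
    )
    trainer.train()
    trainer.save_model(HPARAMS["output_dir"])


# train()  # uncomment on a machine with a suitable GPU
```

`SFTConfig` subclasses the standard `TrainingArguments`, so fields like `num_train_epochs` and `seed` carry over unchanged; the actual fine-tune may well have used additional settings (learning rate, batch size, packing) not recoverable from the card.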

Use Cases

This model is well-suited for applications requiring a compact yet capable instruction-following LLM, such as:

  • Chatbots and virtual assistants
  • Content generation based on prompts
  • Question answering systems
  • Exploratory NLP tasks where a smaller, fine-tuned model is advantageous