j05hr3d/Llama-3.2-3B-Instruct-C_M_T-Reh_Dolly

Hugging Face

Text generation · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Mar 25, 2026 · Architecture: Transformer

j05hr3d/Llama-3.2-3B-Instruct-C_M_T-Reh_Dolly is a 3.2-billion-parameter instruction-tuned language model, fine-tuned from Meta's Llama-3.2-3B-Instruct. It was trained with the TRL framework and supports a context length of 32,768 tokens. The model targets general instruction-following tasks, with fine-tuning aimed at strengthening its conversational abilities.


Model Overview

j05hr3d/Llama-3.2-3B-Instruct-C_M_T-Reh_Dolly is an instruction-tuned language model based on Meta's Llama-3.2-3B-Instruct. With 3.2 billion parameters and a 32,768-token context length, it is suited to conversational and instruction-following applications where a mid-sized model is preferred.

Key Capabilities

  • Instruction Following: Fine-tuned specifically for understanding and executing user instructions.
  • Extended Context: A 32,768-token context window allows the model to process long inputs and maintain coherence over extended dialogues.
  • TRL Framework: Developed with the Transformer Reinforcement Learning (TRL) library, which provides trainers for supervised fine-tuning and preference-based alignment.

Training Details

The model underwent Supervised Fine-Tuning (SFT) using the TRL framework. This process adapts the base Llama-3.2-3B-Instruct model to better align with human instructions and preferences. Training used TRL 0.27.1, Transformers 4.57.6, and PyTorch 2.10.0+cu128.
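For conversational SFT, TRL expects each training example as a list of role-tagged chat messages. The "Reh_Dolly" suffix suggests Dolly-style (instruction/context/response) records, so a preprocessing step might look like the sketch below; the record field names and the helper itself are assumptions, not part of this repository:

```python
# Hypothetical helper: the field names (instruction, context, response)
# follow the Dolly dataset convention and are assumed here.
def dolly_to_messages(record):
    """Convert one Dolly-style record into the chat-message list
    used for conversational SFT."""
    user = record["instruction"]
    if record.get("context"):
        # Fold the optional grounding context into the user turn.
        user = f"{user}\n\n{record['context']}"
    return [
        {"role": "user", "content": user},
        {"role": "assistant", "content": record["response"]},
    ]

example = dolly_to_messages({
    "instruction": "Summarize the passage.",
    "context": "TRL is a library for post-training language models.",
    "response": "TRL helps post-train language models.",
})
```

Lists of messages in this shape can be passed to TRL's `SFTTrainer` as a conversational dataset.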

Use Cases

This model is suitable for a variety of applications requiring an instruction-tuned language model, particularly where a balance between model size and performance is desired. Its instruction-following capabilities make it a good candidate for:

  • General-purpose chatbots
  • Content generation based on specific prompts
  • Interactive applications requiring detailed responses
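For any of these use cases, prompts must follow the chat template the model inherits from Llama 3.2. In practice you would call `tokenizer.apply_chat_template`, but a minimal sketch of the underlying Llama 3 special-token layout (an illustration, not this repository's code) is:

```python
def build_llama3_prompt(messages):
    """Render role-tagged chat messages into the Llama 3 prompt format,
    ending with an open assistant header so the model completes the turn."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant turn open for generation.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Draft a short product description."},
])
```

Using the tokenizer's built-in `apply_chat_template` is preferable in real code, since it stays in sync with the template shipped in the model repository.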