j05hr3d/Llama-3.2-3B-Instruct-C_M_T
Text generation · Concurrency cost: 1 · Model size: 3.2B · Quantization: BF16 · Context length: 32k · Published: Mar 22, 2026 · Architecture: Transformer

j05hr3d/Llama-3.2-3B-Instruct-C_M_T is a 3.2-billion-parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-3B-Instruct. It was trained with the TRL library using Supervised Fine-Tuning (SFT) to improve its instruction-following behavior. The model retains the base model's 32,768-token context length, making it suitable for general text generation and conversational AI tasks.
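As an instruction-tuned causal LM on the Hub, the model can be loaded with the standard Hugging Face `transformers` APIs. The sketch below is a minimal, assumed usage example (the card does not prescribe generation settings; `temperature`, `max_new_tokens`, and the BF16/`device_map="auto"` choices are illustrative, with BF16 matching the quantization listed above):

```python
MODEL_ID = "j05hr3d/Llama-3.2-3B-Instruct-C_M_T"

def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a single-turn chat reply from the model."""
    # Imports are local so the sketch only requires torch/transformers when run.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the BF16 weights listed on the card
        device_map="auto",
    )

    # Format the prompt with the model's chat template (Llama 3.2 style).
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,  # illustrative sampling settings, not from the card
    )
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Because the model was fine-tuned with a chat template, `apply_chat_template` should be preferred over hand-building prompt strings.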
