j05hr3d/Llama-3.2-1B-Instruct-C_M_T-SAM-AUX_CT_CE-RHO0_025
Text generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Published: Mar 26, 2026 · Architecture: Transformer

j05hr3d/Llama-3.2-1B-Instruct-C_M_T-SAM-AUX_CT_CE-RHO0_025 is a 1-billion-parameter instruction-tuned causal language model, fine-tuned from Meta's Llama-3.2-1B-Instruct. It supports a 32,768-token context length and was trained with Supervised Fine-Tuning (SFT) using the TRL framework. It is designed for general instruction-following tasks, providing a compact yet capable option for a range of NLP applications.
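As a hypothetical usage sketch (not part of the model card), the model can be loaded with the Hugging Face `transformers` library like any Llama-3.2 instruct checkpoint; the repo id below is the one named above, and the example prompt is illustrative. The generation step is gated behind a flag because it downloads the ~1B-parameter weights and requires `transformers` and `torch`:

```python
model_id = "j05hr3d/Llama-3.2-1B-Instruct-C_M_T-SAM-AUX_CT_CE-RHO0_025"

# Chat-style prompt in the messages format expected by instruction-tuned
# Llama models (example content is illustrative).
messages = [
    {"role": "user", "content": "Summarize the benefits of small language models."},
]

RUN_GENERATION = False  # flip to True to download and run the model

if RUN_GENERATION:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # bfloat16 matches the BF16 precision listed in the card metadata.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Apply the model's chat template and generate a completion.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the checkpoint is a standard Llama-3.2 derivative, it should also work with any serving stack that supports that architecture.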
