j05hr3d/Llama-3.2-1B-Instruct-2EP-C_M_T-AUX_CT
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Mar 24, 2026 · Architecture: Transformer · Warm

j05hr3d/Llama-3.2-1B-Instruct-2EP-C_M_T-AUX_CT is a 1-billion-parameter instruction-tuned causal language model, fine-tuned from Meta's Llama-3.2-1B-Instruct using the TRL framework. It supports a context length of 32,768 tokens and is intended for general instruction-following tasks.
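As an instruction-tuned Llama-3.2 derivative, the model can be loaded with the Hugging Face `transformers` library and prompted through its chat template. The sketch below is illustrative only: it assumes the checkpoint is publicly available on the Hub and that `transformers` and `torch` are installed; the `generate_reply` helper is not part of the model card.

```python
# Minimal usage sketch (assumption: the checkpoint is public on the
# Hugging Face Hub; `generate_reply` is a hypothetical helper, not an
# official API of this model).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "j05hr3d/Llama-3.2-1B-Instruct-2EP-C_M_T-AUX_CT"


def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Run one instruction-following turn through the model's chat template."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # The card lists BF16 quantization, so load weights in bfloat16.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Calling `generate_reply("Explain what a 32k context window means.")` would download the weights on first use and return the model's answer as a string.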
