ferrazzipietro/unsup-Llama-3.2-1B-Instruct-datav2
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Feb 13, 2026 · License: llama3.2 · Architecture: Transformer
ferrazzipietro/unsup-Llama-3.2-1B-Instruct-datav2 is a 1-billion-parameter instruction-tuned causal language model, fine-tuned by ferrazzipietro from Meta's Llama-3.2-1B-Instruct. The fine-tuning dataset is unspecified; the reported validation loss is 0.2694. With its small parameter count and a 32,768-token context length, it is a compact model suited to tasks that require a smaller footprint.
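If the checkpoint is available on the Hugging Face Hub under this repo id, it can be loaded with the standard Transformers causal-LM classes. The snippet below is a minimal sketch under that assumption; the repo id resolution, dtype choice, and example prompt are illustrative, not taken from this card.

```python
# Minimal sketch: load the checkpoint with Hugging Face Transformers.
# Assumes the model is published on the Hugging Face Hub under the repo id
# "ferrazzipietro/unsup-Llama-3.2-1B-Instruct-datav2".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ferrazzipietro/unsup-Llama-3.2-1B-Instruct-datav2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Llama-3.2 Instruct models use a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Summarize what a 1B-parameter model is good for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```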