xw17/Llama-3.2-1B-Instruct_finetuned_s03_i
Text generation · Concurrency cost: 1 · Model size: 1B · Quantization: BF16 · Context length: 32k · Architecture: Transformer

xw17/Llama-3.2-1B-Instruct_finetuned_s03_i is a 1-billion-parameter instruction-tuned language model with a 32,768-token (32k) context length. It is a fine-tuned variant, though its model card does not currently document the base checkpoint, training data, or what distinguishes it from the stock Llama-3.2-1B-Instruct. Its small parameter count makes it a candidate for efficient deployment in resource-constrained environments.
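To make the resource-constrained-deployment claim concrete, here is a minimal back-of-the-envelope sketch of the weight memory such a model needs. It assumes only the figures stated above (1B parameters, BF16 = 2 bytes per parameter) and deliberately ignores KV-cache and activation memory, which grow with the 32k context and batch size.

```python
def estimate_weight_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Estimate raw weight memory in GiB.

    bytes_per_param defaults to 2 for BF16; KV cache and activations
    (which dominate at long contexts like 32k) are NOT included.
    """
    return num_params * bytes_per_param / (1024 ** 3)

# 1B parameters in BF16: roughly 1.86 GiB for the weights alone,
# small enough for a single consumer GPU or even CPU inference.
print(round(estimate_weight_memory_gib(1e9), 2))
```

The same function shows why quantization matters on constrained hardware: passing `bytes_per_param=1` (INT8) or `0.5` (4-bit) halves or quarters the estimate.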
