paudelnirajan/distill-Qwen2.5-7B-Instruct-Qwen2.5-0.5B-Instruct-oci-50000
Text Generation · Concurrency Cost: 1 · Model Size: 0.5B · Quant: BF16 · Context Length: 32K · Published: Mar 17, 2026 · Architecture: Transformer · Status: Warm

paudelnirajan/distill-Qwen2.5-7B-Instruct-Qwen2.5-0.5B-Instruct-oci-50000 is a 0.5-billion-parameter language model with a 32,768-token context length. As its name indicates, it is a distillation of the larger Qwen2.5-7B-Instruct teacher into the Qwen2.5-0.5B-Instruct architecture, likely optimized for efficient inference and deployment in resource-constrained environments. Its primary purpose is instruction following, making it suitable for natural language processing tasks where a smaller, faster model is preferred over larger alternatives.
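
A minimal sketch of chat-style inference with this checkpoint via Hugging Face transformers. It assumes the model is hosted on the Hugging Face Hub under the repository name above and that it ships a Qwen2.5-style chat template; the example prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint is available on the Hugging Face Hub under this ID.
model_id = "paudelnirajan/distill-Qwen2.5-7B-Instruct-Qwen2.5-0.5B-Instruct-oci-50000"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in BF16 to match the listed quantization.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Format an instruction-following request with the model's chat template.
messages = [
    {"role": "user", "content": "Summarize the benefits of model distillation in two sentences."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens.
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```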
