keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Feb 1, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill is a 1-billion-parameter instruction-tuned causal language model developed by keeeeenw, built on Llama-3.2-1B-Instruct using Hugging Face's Open R1 framework. The model aims to bring reasoning capabilities to compact, efficient architectures, making it suitable for on-device AI assistants and mobile applications. With a context length of 32,768 tokens, it supports extended chain-of-thought reasoning for general-purpose and reasoning tasks despite its small size.
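As a sketch, the model can be loaded with the standard Hugging Face `transformers` chat workflow. The model id comes from this card; the question, generation settings, and dtype choice are illustrative assumptions, not settings documented by the author.

```python
# Minimal sketch: running keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill
# with Hugging Face transformers. Generation parameters are assumptions.

model_id = "keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill"


def build_messages(question: str) -> list[dict]:
    """Build a chat-format message list for the tokenizer's chat template."""
    return [{"role": "user", "content": question}]


if __name__ == "__main__":
    # Imports kept here so the prompt-building helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the quantization listed on this card.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    # Apply the model's chat template and generate a response.
    inputs = tokenizer.apply_chat_template(
        build_messages("How many prime numbers are there below 20?"),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are BF16 at 1B parameters, this fits comfortably on consumer GPUs or CPU-only machines, which is the on-device use case the card highlights.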
