daman1209arora/alpha_0_DeepSeek-R1-Distill-Qwen-1.5B
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 13, 2025 · Architecture: Transformer · Status: Warm

daman1209arora/alpha_0_DeepSeek-R1-Distill-Qwen-1.5B is a 1.5-billion-parameter language model with a 32,768-token context length. Published by daman1209arora, it derives from DeepSeek-R1-Distill-Qwen-1.5B, in which the reasoning capabilities of the much larger DeepSeek-R1 were distilled into a compact Qwen-based 1.5B model, trading scale for efficient inference. Its primary use case is applications that need a compact yet capable language model with extended context understanding.
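For local experimentation, a minimal loading-and-generation sketch with Hugging Face transformers is shown below. The BF16 dtype matches the quant listed above; the prompt, device mapping, and generation length are illustrative assumptions, not settings published with the model.

```python
# Minimal sketch: load the model from the Hugging Face Hub and generate text.
# The dtype matches the BF16 quant listed above; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daman1209arora/alpha_0_DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, as listed in the model metadata
    device_map="auto",           # place layers on available GPU/CPU automatically
)

prompt = "Explain, step by step, why the sum of the first n odd numbers is n^2."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```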


Popular Sampler Settings

The three most popular sampler configurations used by Featherless users for this model; each configuration is defined over the parameters listed below, and a sketch of applying one appears after the list.

Each configuration sets: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p.
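Below is a hedged sketch of applying one such configuration through an OpenAI-compatible chat completions call. The numeric values are placeholders, not the actual user configs from the tabs above, and the Featherless base URL and the extra_body pass-through for non-standard samplers (top_k, repetition_penalty, min_p) are assumptions about the serving API.

```python
# Hedged sketch: request a completion with explicit sampler settings.
# All numeric values below are placeholders, not the published user configs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="daman1209arora/alpha_0_DeepSeek-R1-Distill-Qwen-1.5B",
    messages=[{"role": "user", "content": "Summarize the attention mechanism."}],
    # Standard OpenAI sampling parameters:
    temperature=0.7,         # placeholder value
    top_p=0.9,               # placeholder value
    frequency_penalty=0.0,   # placeholder value
    presence_penalty=0.0,    # placeholder value
    # Non-standard samplers, forwarded only if the server accepts them:
    extra_body={
        "top_k": 40,                # placeholder value
        "repetition_penalty": 1.1,  # placeholder value
        "min_p": 0.05,              # placeholder value
    },
)
print(response.choices[0].message.content)
```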