jan-hq/Deepseek-Qwen2.5-7B-Redistil
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 15, 2025 · Architecture: Transformer · Status: Warm
jan-hq/Deepseek-Qwen2.5-7B-Redistil is a 7.6-billion-parameter language model built on the Qwen2.5 architecture; the name suggests a re-distilled DeepSeek model on a Qwen2.5-7B base. The underlying architecture supports a context window of up to 131,072 tokens, although the listing above shows a 32k serving limit, likely the platform's configured maximum. The long context window makes the model well suited to extensive documents and complex queries that require deep contextual comprehension.
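For local experimentation, the checkpoint can be loaded with Hugging Face `transformers`. This is a minimal sketch, not taken from the model's own documentation: it assumes the repository ships the usual Qwen2.5-style chat template and that `torch` and `accelerate` are installed.

```python
# pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jan-hq/Deepseek-Qwen2.5-7B-Redistil"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # requires accelerate; spreads layers across available devices
)

# Assumes a chat template is bundled with the tokenizer (standard for Qwen2.5 derivatives).
messages = [{"role": "user", "content": "Summarize the attention mechanism in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```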
Popular Sampler Settings
The three most popular sampler configurations used by Featherless users for this model appear on the live page as selectable tabs; their values did not render in this capture. The tunable parameters are: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. A request sketch using these parameters follows.
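Because the captured values are blank, the sketch below only shows where each knob plugs into an OpenAI-compatible request, which is how Featherless exposes its models. The base URL, API key, and all numeric values are illustrative assumptions; `temperature`, `top_p`, `frequency_penalty`, and `presence_penalty` are standard OpenAI parameters, while `top_k`, `min_p`, and `repetition_penalty` are backend-specific extensions passed through `extra_body`.

```python
# pip install openai
from openai import OpenAI

# Endpoint and key are placeholders; substitute your own credentials.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="jan-hq/Deepseek-Qwen2.5-7B-Redistil",
    messages=[{"role": "user", "content": "Write a haiku about long context windows."}],
    # Standard OpenAI sampling parameters. Values here are illustrative, not the
    # page's actual top-3 configs, which did not render in this capture:
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers go in extra_body; support depends on the backend.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.05},
)
print(response.choices[0].message.content)
```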