deepseek-ai/DeepSeek-R1-Distill-Llama-70B
Type: Text Generation
Concurrency Cost: 4
Model Size: 70B
Quant: FP8
Context Length: 32k
Published: Jan 20, 2025
License: MIT
Architecture: Transformer
Status: Open Weights, Warm
DeepSeek-R1-Distill-Llama-70B is a 70-billion-parameter language model from DeepSeek-AI, distilled from the larger DeepSeek-R1 model onto the Llama-3.3-70B-Instruct architecture. It is fine-tuned on reasoning data generated by DeepSeek-R1 and excels at complex reasoning, mathematical, and coding tasks. With a 32,768-token context length and strong results on benchmarks such as AIME 2024 and MATH-500, it is well suited to applications that require advanced problem solving.
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model each set the following sampler parameters:
temperature: scales the sampling distribution; lower values make output more deterministic
top_p: nucleus sampling; samples from the smallest token set whose cumulative probability exceeds p
top_k: restricts sampling to the k most likely tokens
frequency_penalty: penalizes tokens in proportion to how often they have already appeared
presence_penalty: penalizes any token that has already appeared at least once
repetition_penalty: applies a multiplicative penalty to previously generated tokens
min_p: discards tokens whose probability is below min_p times that of the most likely token
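As an illustration of how these settings are applied in practice, the sketch below sends a chat completion request through an OpenAI-compatible client, which is how Featherless exposes its models. The base URL, API key, and all parameter values are illustrative assumptions, not one of the actual top-3 configs; substitute your own key and a config from the list above.

```python
# A minimal sketch of passing these sampler settings to an
# OpenAI-compatible chat completions endpoint. Endpoint URL, API key,
# and parameter values below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed Featherless endpoint
    api_key="YOUR_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
    # Sampler parameters in the standard OpenAI schema:
    temperature=0.6,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Parameters outside the OpenAI schema are commonly accepted by
    # OpenAI-compatible servers via extra_body; field names assumed here:
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```

For what it's worth, DeepSeek's own usage notes for the R1 distills recommend a temperature in the 0.5 to 0.7 range (0.6 is used above); the remaining values are placeholders.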