stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 20, 2025 · Architecture: Transformer · Warm: 0.0K
The stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated model is an 8 billion parameter language model built on DeepSeek-R1-Distill-Llama-8B, which distills the reasoning behavior of DeepSeek-R1 into the Llama architecture. With a 32,768 token context length, it is suited to tasks that require extensive contextual understanding. The 'Abliterated' suffix conventionally indicates that the model's built-in refusal behavior has been removed via directional ablation of the associated activations, rather than a change to the architecture or an efficiency optimization.
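As a minimal sketch of running the checkpoint locally with Hugging Face transformers, the following assumes the Hugging Face repo id matches the page title; the dtype and device settings are illustrative assumptions, not requirements stated on this page.

# Minimal sketch: local inference with Hugging Face transformers.
# Repo id taken from the page title; dtype/device are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # assumed; an 8B model fits in ~16 GB at bf16
    device_map="auto",           # place layers across available devices
)

inputs = tokenizer(
    "The key idea behind model distillation is",
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))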
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature:
top_p:
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
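The concrete values for these settings did not survive capture, so the following only illustrates how such sampler settings would be passed when querying the model. This is a minimal sketch assuming Featherless exposes an OpenAI-compatible chat completions endpoint; the base URL, environment variable name, and every parameter value are illustrative assumptions, not this model's actual popular configs.

# Minimal sketch: applying sampler settings via an assumed
# OpenAI-compatible endpoint. All values are illustrative.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated",
    messages=[{"role": "user", "content": "Explain distillation in one paragraph."}],
    temperature=0.7,  # standard OpenAI-style sampler parameters
    top_p=0.9,
    # Non-standard samplers (top_k, min_p, repetition_penalty) are not part
    # of the OpenAI schema; the openai Python client forwards them via extra_body.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)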